Rarely has a single conference of experts in audiology been able to agree unanimously about a difficult hearing disorder, bringing clarity and focus out of a diffuse collection of information and reports. But the Consensus Conference on the Diagnosis of Auditory Processing Disorders in School-Aged Children held last April in Dallas did it.
Organized by James Jerger, PhD, and Frank Musiek, PhD, the 14-member panel broadly defined “auditory processing disorder,” or APD, as “a deficit in the processing of information that is specific to the auditory modality.” The panel added, “The problem may be exacerbated in unfavorable acoustic environments. It may be associated with difficulties in listening, speech understanding, language development, and learning. In its pure form, however, it is conceptualized as a deficit in the processing of auditory input.”
In its report, the panel distinguishes between screening tests, which identify children who may have APD and thus need referral to an audiologist, and diagnostic tests, which differentiate APD from other disorders.1
The report also discusses screening by questionnaire and screening by test. A minimal APD test battery is recommended and the advantages and disadvantages of its possible approaches are described: behavioral tests, electrophysiologic and electroacoustic tests, and neuroimaging studies.
In an editorial accompanying publication of the full text of the report, Jerger counsels: “Readers are urged to study it in detail.”2
SEEKING BETTER DX TESTS FOR APD
Auditory processing disorder, also called central auditory processing disorder (CAPD),3 involves a reduced ability to manipulate and use acoustic signals in spite of normal hearing sensitivity. It may sometimes involve an interaction with skills related to language, attention, and cognitive ability. The most common symptoms in patients with APD are difficulties with speech discrimination in any of a number of listening environments.
Audiologists use various tests in the diagnosis of APD, including dichotic listening tests, the SCAN test, the SSW (staggered spondaic word) test, pitch patterns, low-redundancy speech tests, and gap-detection tests. They may also have electrophysiologic measures available for diagnosis, including the ABR (auditory brainstem response), MLR (middle latency response), and MMN (mismatch negativity). Although these tests provide some information, they also have inherent limitations that should be kept in mind, according to Deborah W. Moncrieff, PhD, assistant professor, Department of Communication Sciences and Disorders, University of Florida, Gainesville.
Moncrieff notes, “The biggest problem with our current diagnostic methods is that we use speech for most of our behavioral testing. This leaves us unsure whether our patients have a specific auditory perceptual deficit or a language deficit. We need to develop tests that focus on auditory perceptual skills and dissociate perception from language.”
A battery of tests based on auditory perceptual skills alone would enable audiologists to better isolate a certain type of APD and to rule out others, depending on the presenting complaint and findings from a cross check of one test to another, according to Moncrieff. Proceeding from the broad to the specific, a flow chart of tests would help direct the audiologist to one diagnosis for a patient who hears well only in quiet, for example, and to another diagnosis for a patient who has difficulty understanding speech both in noise and in quiet.
Moncrieff and James W. Hall III, PhD, clinical professor of audiology at the University of Florida, are currently developing such a flow chart for APD testing. They have also begun to work with the Veterans Administration Brain Rehabilitation Center at the Gainesville VA Hospital where research is being conducted with adult neurogenic patients. Moncrieff reports, “Some patients seem to do well with therapy and others do not, suggesting that, in some cases, an auditory processing deficit may interfere with the effectiveness of a therapy program. As a group of researchers, we are developing a brain model/diagnostic model that will attempt to integrate specific areas of the brain with diagnostic tools and remediation strategies. It's just a beginning at this point with many holes in it yet to be filled.”
Moncrieff adds, “For now, the consensus statement by Jerger and Musiek is a very good place to start at diagnostics for APD.”
Diagnosing and treating APD in children
Moncrieff and colleagues at the University of Florida are also working at “norming” a number of auditory processing tests in children. This work is important because norms for many tests are available only down to age 8 years. For some measures, adult-like performance is presumed to begin at age 11 years, and this assumption creates another weakness. On the SCAN, for example, which has been well normed on both adults and children, an 11-year-old's scores on some subtests differ markedly from adult scores, leaving a sizable gap between the two sets of norms. A 12-year-old scored against adult norms may thus look very different than if scored against the norms for older 11-year-olds. The researchers will attempt to address this and similar situations in testing and scoring.
The researchers are also looking at children who have abnormally large interaural asymmetries: Their right ears perform at normal or near-normal levels, but their left ears are significantly poorer when the two ears compete. Interaural asymmetry appears to be an important red flag for auditory processing problems; however, why it occurs and what it means are still not well understood.
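One simple way to quantify the asymmetry Moncrieff describes is the difference between right- and left-ear scores on a dichotic task. The sketch below is illustrative only: the function names and the 20-point flag threshold are assumptions for the example, not clinical norms.

```python
# Illustrative computation of an interaural asymmetry score from dichotic
# listening results. The 20-point flag threshold is an assumption for this
# example, not a published clinical criterion.

def interaural_asymmetry(right_pct: float, left_pct: float) -> float:
    """Right-ear score minus left-ear score, in percentage points."""
    return right_pct - left_pct

def flag_asymmetry(right_pct: float, left_pct: float,
                   threshold: float = 20.0) -> bool:
    """Flag a child whose ears differ by at least `threshold` points."""
    return abs(interaural_asymmetry(right_pct, left_pct)) >= threshold

# A child whose right ear performs normally but whose left ear drops
# sharply under dichotic competition would be flagged:
print(flag_asymmetry(92.0, 55.0))
```

In this sketch a 37-point right-ear advantage trips the flag, while small, developmentally ordinary differences would not.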
Moncrieff says, “The lack of specificity in available diagnostic tests for APD in children results in uncertainty regarding remediation. First of all, any abnormal result such as acoustic reflex abnormalities or poor word-recognition ability may suggest a retrocochlear disorder and require referral to a neurologist. We also make appropriate referrals to speech-language pathologists who may find there is benefit from interactive remediation programs like the Lindamood-Bell, Earobics, and FastForWord programs.
“We have to tell parents that remediation may not help, but it is not likely to harm the child in any way. For many parents, we typically recommend more one-on-one instruction, reduction of distractions, FM systems if there is an auditory figure-ground problem, and the expectation that learning may take longer for a child with APD. But at least we are empowering parents to understand that there is a problem, that they can advocate on their child's behalf, and that we are working through research goals to better diagnose and remediate it.”
Research expected to help hearing aid wearers
Hearing-impaired patients and hearing aid wearers are likely to benefit greatly from research in APD. Moncrieff says, “Many hearing aid wearers are pleased with the increased level of signals produced through their hearing aids, but they are often unable to correctly process speech information available from them. This common complaint suggests that a basic auditory processing deficit may be involved in many such cases. Proper diagnosis and remediation would improve their hearing and quality of life significantly. This is a huge area that needs to be investigated.”
Moncrieff concludes, “Too many audiologists shy away from APD, perhaps because of its present uncertainties and difficulties. I personally believe they are wrong to do so. In fact, I believe the future of audiology lies in work and discovery in connection with APD.
“The future of audiology as a doctoral-level profession would also be enhanced by as many students as possible extending their education, earning PhD degrees, and working in both clinical and research settings. Clinical and basic research hybrids are desperately needed in audiology. It's a daunting task to be doing clinical work and research and teaching all at the same time, but it's a really important part of what we do, now and for the future of the profession.”
ENHANCED BRAIN MAPPING VIA EEG
An ambitious program to extend the uses of electroencephalography (EEG) in auditory diagnostics, research, and teaching is under way at Texas Tech University Health Sciences Center in Lubbock. The new Center for Functional Brain Mapping and Cortical Studies will provide faster, more sensitive imaging than standard EEG, according to D. Dwayne Paschall, PhD, director of the audiology program at Texas Tech.
Aspects of auditory processing will be included in many of the projects that will be assigned to the new system. In one mode, a patient's electroencephalogram can be overlaid on a CT (computed tomography) or MRI (magnetic resonance imaging) scan of the brain taken at the same time. This coincident imaging enables audiologists to plot the electrical activity occurring in the brain against the anatomical topography of the patient's own brain. This technique provides more information about the brain's electrical activity than can be obtained from the EEG alone.
Paschall says, “We are creating facilities that can show from millisecond to millisecond how and where the brain is responding to stimuli like different sounds or thoughts or language. This has certain advantages even over functional MRI or PET scanning in which blood flow over time is recorded. The brain creates a lot of activity in a minute or two or even a second or two. At the same time, we can record a series of the brief readings of our system, then animate them and show how electrical activity is changing in the brain over time.”
The system also enables researchers to compute the relationship of activity in one area of the brain with that in another area. For example, this type of correlation of various activity patterns might provide insight into the nature of noise trauma or ototoxic drugs and their gradually accumulating damaging effects on hearing.
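The cross-area analysis described above amounts to correlating the time series recorded at different electrode sites. The sketch below (assuming NumPy; the signals are synthetic stand-ins for real electrode recordings, not data from the Texas Tech system) computes a Pearson correlation between two such sites:

```python
# Minimal sketch of cross-area EEG correlation: two synthetic "electrode"
# signals share a 10 Hz component plus independent noise, and we measure
# how strongly their activity co-varies.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)          # one second sampled at 1 kHz
common = np.sin(2 * np.pi * 10 * t)  # shared 10 Hz oscillation

site_a = common + 0.3 * rng.standard_normal(t.size)
site_b = common + 0.3 * rng.standard_normal(t.size)

# Pearson correlation between the two sites' time series
r = np.corrcoef(site_a, site_b)[0, 1]
print(f"correlation between sites: {r:.2f}")
```

Because both synthetic sites carry the same underlying oscillation, the correlation comes out high; tracking how such correlations change over time or across conditions is the kind of analysis the passage describes.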
Overcoming current test limitations
The Texas Tech system is broad enough to overcome some limitations of standard diagnostic tests. For example, central auditory processing tests are not always available for very young children. However, the system's ability to measure cortical responses, such as those reflecting discrimination between sounds, can provide useful auditory processing diagnostics in these children that might not otherwise be available. The system is also flexible and finely tunable, so that operators can introduce needed refinements to standard parameters. For example, the electrodes can be placed in non-standard positions on the scalp, the data redigitized, and a better image produced.
The system may be applied in new or modified ways to use evoked potentials to study very young patients (or others who may be non-speaking) with suspected CAPD by measuring their responses to speech stimuli.
Paschall gives this example: “We can record responses of the brain to a listener's categorization of speech sounds. If we present ‘ee-ee-ee-ah-ee-ee-ee’ to the listener via headphones we can then determine if the listener recognized the ‘ah’ in the series, in effect, determining if the listener ‘heard’ there was a different sound being presented. Also, if the listener has difficulty hearing the difference between ‘hit’ and ‘head,’ we can derive certain information about the deficit causing the difficulty. While numerous audiologic techniques use these or similar sound tests, enhanced EEG will allow us to track the brain's activity at the time. We also are working on protocols that could potentially help distinguish between deficits in attention and deficits in auditory processing—a long-standing and difficult problem in audiology.”
Paschall believes the center will eventually find valuable information about brain activity from studies as diverse as looking at the brains of stutterers vs. non-stutterers; the “surprise” evoked when listeners hear a wrong note in a musical melody; and how hearing-impaired individuals fill in gaps of missing information lost to a hearing deficit. Information from such studies may well have application in better processors for digital hearing aids and in training new wearers of cochlear implants.
NEW FINDINGS IN NEWBORN SCREENING
A hallmark of universal newborn hearing screening (UNHS) is the rapidity with which these programs are proliferating. At last report, 34 states had either mandatory or voluntary universal hearing screening in newborns. Compare that with March 1993, when the NIH Consensus Development Conference recommended that all babies be screened before hospital discharge. At that time, only Hawaii and Rhode Island had passed legislation requiring hearing screening for all babies born in the state.4
In November 2000, this journal published a special issue on UNHS, guest-edited by Judith S. Gravel, PhD.5 It detailed the history, benefits, implementation, and issues regarding newborn hearing screening. However, in the short time since then, new reports have added to knowledge about this important subject.
Value of early Dx underscored
The younger that deaf and hearing-impaired children are identified and receive cochlear implants, the better they do on speech-recognition tests later in life, according to researchers at the University of Michigan Health System. The positive effect of early implantation is evident even in comparisons between younger children and older children who have had their implants for the same length of time. Despite the older children's maturity advantage, the younger, earlier implanted children do better.
The researchers found a significant difference in speech recognition between those who got their implants between the ages of 2 years and 4 years, the critical period for language development, and those who received them later, according to Paul Kileny, PhD, lead author of the study published in Otology and Neurotology.6
In an interview, Kileny told The Hearing Journal, “We found that the longer children had had their implants, the better they did, though the effect was still largest in those who were identified and implanted earliest.”
The researchers looked at test results from 101 children who received the same model of cochlear implant between the ages of 2 and 14 years. The children were divided roughly in half to allow for two analyses that could isolate the effect of age at implantation on speech recognition. One group of 48 children had their speech-recognition skills tested when they turned 7 years old, regardless of when they got their implants. The other group of 53 children of various ages was tested 3 years after implantation to isolate the effects of age at implantation. All children took a battery of standard tests to measure their ability to recognize sounds, words, and sentences. Overall, the results showed a strong combined effect of age at implantation and length of time with the implant.
Additionally, the researchers grouped the children in each arm of the study into four subgroups. Children tested 3 years post-implant were divided according to age at implantation, and those tested at age 7 years were grouped according to time since implantation. The differences between groups were clearest and most statistically significant among 7-year-olds who had had their implants for 4 or more years and among children implanted between the ages of 2 and 4 years. But even 7-year-olds who had had their implants for only 3 years scored significantly better than those who had had them for 1 or 2 years. Also, children implanted between the ages of 5 and 7 did better than those implanted between the ages of 10 and 13.
Kileny says, “The Food and Drug Administration recently approved the Nucleus Contour for implantation at 12 months of age and the Advanced Bionics Clarion should be approved for that age soon. We are moving the boundaries downward to 12 months of age. In 2 years we should have data for a comparison between implantation at 12 months of age versus implantation at 2 years to 4 years. I'm convinced that we will be able to show advantage even at the low end of the age scale. Clearly, that shows again the critical need for identification, diagnosis, hearing aid trial, and appropriate implantation as early as possible.”
When costs count in screening choices
Hospitals that are preparing to implement newborn hearing screening programs face numerous variables in their decisions. In most cases, models from other institutions are difficult to apply directly. In an effort to provide data from a wide range of operational screening programs, researchers at the University of Washington Health Sciences Center, Seattle, and the National Center for Hearing Assessment and Management, Logan, UT, created a decision-analysis model to estimate the cost and cost-effectiveness of newborn hearing screening methods.
Eric J. Kezirian, MD, MPH, and colleagues presented data from the model at the 2000 annual meeting of the American Academy of Otolaryngology—Head and Neck Surgery.7 In an interview, Kezirian explained, “The most significant barrier to implementation of many universal newborn hearing screening programs is cost. Our study compared the most common protocols currently in use in order to assist program directors in their choice of screening protocol.”
The model was constructed to include four protocols that represent 90% of screening programs in pediatric care facilities. The data came from a variety of sources, the most important being 135 operational screening programs. Some data estimates were derived from population-based studies of infants. Overall, study findings were drawn from information from thousands of cases.
Studied from the perspective of the hospital setting, protocols were compared on cost, screening test sensitivity, and screening test specificity. Cost was determined as the total cost per infant screened, from the initial test through diagnostic evaluation (if required). Effectiveness was defined as the number of infants with hearing loss identified specifically through the screening program. Cost-effectiveness was the ratio of the two. The sole outcome measure was the number of infants with hearing loss identified through newborn screening.
Kezirian described the study's findings as follows: Otoacoustic emissions testing at birth followed by repeat testing at follow-up (OAE/OAE) demonstrated the lowest cost ($13/infant) and lowest cost-effectiveness ratio ($5100/infant with hearing loss identified). Screening auditory brainstem response testing at birth with no screening test at follow-up (S-ABR/none) was the only protocol with greater effectiveness, but it also was associated with the highest cost ($25/infant) and highest cost-effectiveness ratio ($9500/infant with hearing loss identified).
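These figures hang together arithmetically: the cost-effectiveness ratio is the cost per infant divided by the fraction of screened infants identified with hearing loss. The short check below uses the protocol figures reported in the study; the per-1000 detection rates it derives are inferences from those figures, not numbers the study reports.

```python
# Back-of-envelope check of the reported screening figures. Dividing cost
# per infant by the cost-effectiveness ratio recovers the implied number
# of cases identified per infant screened.

def ce_ratio(cost_per_infant: float, cases_per_infant: float) -> float:
    """Cost-effectiveness ratio: dollars per case of hearing loss identified."""
    return cost_per_infant / cases_per_infant

# Figures reported by Kezirian et al.
protocols = {
    "OAE/OAE":    {"cost": 13.0, "ce": 5100.0},
    "S-ABR/none": {"cost": 25.0, "ce": 9500.0},
}

for name, p in protocols.items():
    # Implied cases identified per 1000 infants = (cost / CE ratio) * 1000
    per_1000 = p["cost"] / p["ce"] * 1000
    print(f"{name}: ~{per_1000:.2f} cases identified per 1000 infants screened")
```

The implied rates (roughly 2.5 per 1000 for OAE/OAE and 2.6 per 1000 for S-ABR/none) are consistent with the study's finding that S-ABR/none was the only protocol with greater effectiveness, at roughly twice the cost per infant.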
Kezirian concludes, “Some hospitals don't have much, if any, funding allocated for newborn hearing screening. Our study was intended to give hospital administrators in those cases some comparative financial data to consider. The OAE/OAE protocol would be selected because of its lowest cost and lowest cost-effectiveness ratio. To identify the largest fraction of infants with hearing loss, program directors might well choose the S-ABR/none protocol even with its higher associated cost.”
Editor's note: The prevailing view of those involved with newborn hearing screening is that a two-stage screening approach should be used, combining OAE and ABR. For more on the topic, see Gorga et al., JAAA, Vol. 12, No. 2 (February 2001), pages 101–112.
PREDICTING VULNERABILITY TO TRAUMA
Researchers at Harvard Medical School (HMS) and Eaton-Peabody Laboratory, Massachusetts Eye and Ear Infirmary, Boston, have described a non-invasive assay of olivocochlear (OC) reflex strength that may be used to predict vulnerability to acoustic injury.
Reporting in The Journal of Neuroscience, M. Charles Liberman, MD, HMS professor of otology and laryngology, and colleagues note that noise-induced hearing loss is highly variable: Some individuals have “tough” ears whereas others have “tender” ears.8 The assay measures the strength of a sound-evoked neuronal feedback pathway to the inner ear, the OC efferents, by examining otoacoustic emissions created by the normal ear, which can be measured with a microphone in the external ear.
In the guinea pig model, the researchers found that reflex strength was inversely correlated with the degree of hearing loss after subsequent noise exposure. The data suggest that one function of the OC efferent system is to protect the ear from acoustic injury. According to the researchers, this assay, or a simple modification of it, could be applied to human populations to screen for individuals most at risk in noisy environments.
In an e-mail exchange with the Journal, the researchers cited EPA statistics showing that more than 9 million American workers have daily job-related sound exposures in excess of 85 dB, i.e., in a potentially hazardous range. In light of these statistics, the team is developing a study to test the reflex system in construction workers.
The authors conclude their report with these comments: “Regardless of the mechanisms underlying OC-mediated protection, the correlation between medial OC (MOC) reflex strength and vulnerability provides a powerful non-invasive screen for individuals with ‘tough’ versus ‘tender’ ears. MOC reflex strength can be measured in human subjects, based on OAE suppression by contralateral sounds. Furthermore, MOC reflex varies among human subjects. Thus, an OAE-based test should also work in human populations. Although there are likely to be a variety of other risk factors in determining the vulnerability to acoustic injury, the present results suggest that OC reflex strength may be the single most important indicator. If true, the ability to identify those most at risk for noise-induced hearing impairment provides a strategy for reducing future injury and compensation claims in the population at large.”
NANO-MICROPHONES MODELED ON THE EAR
The day may come, perhaps sooner than expected, when the tools and techniques of auditory diagnostics operate in a new dimension. Researchers at NASA's Jet Propulsion Laboratory in Pasadena, CA, reported in December at an Acoustical Society of America meeting that they had created nano-level microphones “that resemble the microscopic, supersensitive stereocilia of the ear.” These highly ordered arrays of carbon nanotubes are only a few atoms in width, but they respond to sound more sensitively than the ear itself.
The researchers told Science News, “Nanoscale acoustic sensors… might one day take a voyage through the body…or improve hearing aids.”10
BERLIN: FOUR TESTS ARE MANDATORY—AND SO IS THE ORDER
Charles I. Berlin, PhD, decries the belief of some hearing professionals that the audiogram is always the gold-standard test of auditory function. He recently told HJ that there is a better way of evaluating new patients than focusing on “air, bone, and speech.” Berlin is professor of hearing science, professor of otolaryngology-head and neck surgery, and director of the Kresge Hearing Research Laboratory of the South, Louisiana State University Health Sciences Center, New Orleans.
Berlin recommends that audiologists begin their testing in the following order before they go to behavioral audiometry or the pure-tone audiogram:
❖ tympanometry
❖ middle ear muscle reflexes (both ipsilateral and contralateral)
❖ otoacoustic emissions
❖ speech-reception threshold
Berlin notes that five electrophysiologic events occur in the cochlea and are recordable from mammals, four of which are also recordable from living human beings. The five events are: endocochlear potential (EP); cochlear microphonic, or hair cell potential; compound action potential; summating potential; and otoacoustic emissions. Different combinations of dysfunction in these events can lead to similar-looking pure-tone audiograms yet require vastly different management.
Why the special test order? Berlin explains, “Tympanometry must be done first to rule out middle ear obstructions, which cloud the interpretation of emission and reflex abnormalities. If tympanometry is normal, we look for middle ear muscle reflexes and emissions both to be normal. If there are both reflexes and emissions present, we know that the EP is present, that there should be cochlear microphonics and compound action potentials present, and we expect enough synchrony to generate a middle ear muscle reflex. This usually suggests an intact auditory nerve. Then, if the audiogram suggests a hearing loss, hearing aids can be considered as a rational treatment since we now know the auditory nerve is synchronously discharging and will respond to low-level amplified sounds.”
Berlin continues, “However, if tympanometry is normal, reflexes are absent, and emissions are present, then we are likely to have someone with normal outer hair cells and normal EP but poor neural synchrony. These patients, surprisingly, can have audiograms ranging from normal to ‘anacusis,’ but still have otoacoustic emissions. Hearing aids may increase sensitivity (and ‘improve the audiogram’), but they will not improve the impaired neural synchrony. These patients are currently called auditory neuropathy (AN) patients, but are better described as patients with auditory dys-synchrony.”
Berlin notes that most diagnostic audiology practitioners see only 10 to 12 sensorineural patients per thousand a year who have vestibular schwannomas (the proper nomenclature for most VIIIth Nerve tumors). In contrast, he says, “We have seen and/or consulted on over 200 AN patients in the past 2 years with absent ABRs and normal emissions. In our practice, this coincidence of events—normal emissions with absent ABRs and absent middle ear muscle reflexes—has occurred many times more frequently than the diagnosis of VIIIth Nerve tumors.”
Berlin continues, “So now, back to the recommended order to properly categorize patients before they even take a pure-tone audiogram or before we even accept a pure-tone audiogram as having any diagnostic or clinical value. Tympanometry first, because as long as it is normal we should see emissions and reflexes. Reflexes second, because if they are present, we have ruled out AN, and the presence or absence of emissions can now be properly interpreted. Then, if the emissions are present, we know that the EP, outer hair cells, and the middle ear should all be normal and the patient's voluntary audiogram should be at or near normal. Conversely, if the reflexes are absent and the emissions normal, we have an auditory dys-synchrony and know to interpret the audiogram with great care and ignore it as an index of hearing aid utility.”
Berlin concludes, “Finally, if the tympanometry is abnormal, and we do not see reflexes or emissions, we know that the true status of the inner ear is still unclear and other tests and/or medical attention will be needed to unravel this problem. If you start with pure-tone audiometry and then have to deal with the ‘mysteries’ that follow, you may either neglect to go any further than a normal pure-tone and speech result, thus missing a powerful set of diagnostic options, or you may erroneously pronounce as ‘normal’ a patient who has serious auditory dys-synchrony that is invisible to a pure-tone audiogram.”
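Berlin's test order reads almost like a decision procedure. As a minimal sketch (the function shape and category labels are illustrative paraphrases of his description, not a clinical protocol):

```python
# Hedged sketch of the categorization logic Berlin describes: tympanometry
# first, then reflexes, then emissions, before the pure-tone audiogram is
# given any diagnostic weight. Labels are illustrative, not clinical terms.

def categorize(tymp_normal: bool, reflexes_present: bool,
               emissions_present: bool) -> str:
    if not tymp_normal:
        # Middle ear obstruction clouds interpretation of reflexes/emissions.
        return ("middle ear problem: inner ear status unclear; "
                "further tests or medical attention needed")
    if reflexes_present and emissions_present:
        # EP, cochlear microphonics, action potentials, and neural synchrony
        # are all implied; the auditory nerve is discharging synchronously.
        return ("intact auditory pathway: audiogram interpretable; "
                "hearing aids rational if loss is shown")
    if emissions_present and not reflexes_present:
        # Normal outer hair cells and EP but poor neural synchrony.
        return ("auditory dys-synchrony (AN): interpret audiogram with care; "
                "not an index of hearing aid utility")
    return "emissions absent: further testing needed"

print(categorize(tymp_normal=True, reflexes_present=False,
                 emissions_present=True))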
The LSU Medical School web site contains additional information about this discussion (“Fit the physiology not the audiogram”), as well as four articles on identifying, diagnosing, and managing AN patients.9