Aspiring for New Heights From a Sound Foundation

Wolfe, Jace PhD; Smith, Joanna MS

doi: 10.1097/01.HJ.0000513100.41721.e4
Tot 10

Dr. Wolfe, left, is the director of audiology at Hearts for Hearing and an adjunct assistant professor at the University of Oklahoma Health Sciences Center and Salus University. Ms. Smith, right, is a founder and the executive director of Hearts for Hearing in Oklahoma City.

Every architect knows it is imperative to start with a solid foundation to build a structure that will stand the test of time. A solid foundation is particularly essential when one aspires to reach lofty heights. With the importance of a solid foundation in mind, Richard Seewald, PhD, and the late Judy Gravel, PhD, launched the pediatric audiology conference, A Sound Foundation Through Early Amplification, in 1998. The seventh conference, held in Atlanta on Oct. 2-5, 2016, featured speakers who shared state-of-the-art, evidence-based updates on pediatric hearing health care, including diagnostic audiology, hearing aid technology, implantable hearing technology, and habilitative services. Although the event is sponsored by a hearing aid manufacturer (Phonak), the presentations were largely free from commercial influence and well-grounded in cutting-edge science and research applicable to clinical practice. Sound Foundation is a can't-miss event for every hearing health care professional who routinely provides services to children with hearing loss. Here's a glimpse of some conference highlights. Kudos to Anne Marie Tharpe, PhD, and Marlene Bagatto, AuD, PhD, for co-chairing an excellent meeting.

10. Chirping about new ABR technology: Yvonne Sininger, PhD, addressed the merits of CE-Chirp stimuli. The CE-Chirp stimulus reorganizes the timing of spectral stimulation to synchronize the response of the cochlea. In other words, the low-frequency components of a broadband stimulus are presented prior to the high-frequency components within the stimulus.

Sininger likened this process to a race in which the slow runners are given a head start so that everyone will cross the finish line at the same time. Likewise, the low-frequency components of a broadband signal are introduced first, followed by the mid- and then the high-frequency components. The result is simultaneous and abrupt stimulation of the entire basilar membrane, and in turn, a larger amplitude response relative to that evoked by a click, which primarily elicits a response from the basal end of the cochlea.

Narrowband CE-Chirps are also available. For these stimuli, the spectral energy across a narrow frequency range is staggered to produce a synchronous response within the region of the basilar membrane corresponding to the stimulus frequency range. CE-Chirp technology may also be used to elicit the auditory steady state response (ASSR).
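The staggered timing described above can be sketched in a few lines of code. The snippet below is purely illustrative: it builds a broadband stimulus as a sum of tones in which lower frequencies are presented earlier (the "head start" in Sininger's race analogy). The specific frequencies and delay values are hypothetical and do not represent the actual CE-Chirp delay model.

```python
import math

def chirp_like_stimulus(freqs_hz, delays_ms, dur_ms=10.0, fs=48000):
    """Sum of tones in which each frequency component is onset-delayed.
    Lower frequencies get smaller delays (earlier onsets), so all
    components reach their cochlear place at roughly the same time.
    Illustrative only; not the actual CE-Chirp delay model."""
    n = int(fs * dur_ms / 1000)
    samples = [0.0] * n
    for f, d in zip(freqs_hz, delays_ms):
        start = int(fs * d / 1000)  # onset sample for this component
        for i in range(start, n):
            t = (i - start) / fs
            samples[i] += math.sin(2 * math.pi * f * t)
    return samples

# Hypothetical values: low frequencies lead, high frequencies follow.
freqs = [500, 1000, 2000, 4000]   # Hz
delays = [0.0, 1.2, 2.0, 2.5]     # ms of onset delay per component
stimulus = chirp_like_stimulus(freqs, delays)
```

A click, by contrast, would correspond to all components starting at delay zero, which stimulates the basal (high-frequency) end of the cochlea well but smears the apical response in time.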

Sininger described several studies that examined CE-Chirp stimuli and their advantages over click and tone burst stimuli. CE-Chirp stimuli potentially elicit larger amplitude responses, enable threshold responses at lower levels (typically about 5 dB lower than tone burst stimuli), and allow faster completion of a conventional, threshold-search ABR and/or ASSR.

Of note, there are no formal standards to guide the calibration of CE-Chirp stimuli levels. Additional research is needed to determine the correction factors necessary to estimate behavioral threshold from CE-Chirp thresholds in infants and young children with varying degrees of hearing loss. Furthermore, additional study is required to further understand the characteristics of typical responses to bone conduction CE-Chirp stimuli. Fortunately, several scientists and clinicians are actively conducting studies to address these outstanding items. Based on Sininger's presentation, it is likely that the CE-Chirp will become the stimulus of choice for ABR and ASSR assessment in children.

9. Back to the ASSR: Speaking on the same theme of electrophysiologic assessment for threshold estimation, Susan Small, PhD, examined the clinical application of ASSR assessment. Small discussed a number of recent studies that explored the use of ASSR to predict air and bone conduction thresholds in infants. To briefly summarize, she noted that air conduction ASSR thresholds in normal-hearing infants are typically obtained at stimulus levels no greater than 50, 45, 40, and 40 dB HL at 500, 1,000, 2,000, and 4,000 Hz, respectively. She suggested that the ASSR may be used as a measure to expeditiously confirm normal hearing sensitivity by simultaneously presenting ASSR stimuli at those four frequencies to each ear at the aforementioned maximum normal-hearing levels. If a response is obtained, the clinician may surmise that the infant has normal hearing sensitivity. However, if there is no response to one or more test signals, then the clinician will complete a threshold search using tone burst ABR.
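The screening logic Small proposed is a simple decision rule, sketched below. The level cutoffs come from her presentation; the function and variable names are hypothetical, and this is a sketch of the decision flow, not a clinical tool.

```python
# Maximum stimulus levels (dB HL) at which normal-hearing infants
# typically show an air conduction ASSR response, per Small's talk.
MAX_NORMAL_LEVELS_DB_HL = {500: 50, 1000: 45, 2000: 40, 4000: 40}

def assr_screen(responses):
    """responses: dict mapping frequency (Hz) -> True if an ASSR
    response was detected at that frequency's maximum normal-hearing
    level. Returns the recommended next step."""
    missing = [f for f in MAX_NORMAL_LEVELS_DB_HL
               if not responses.get(f, False)]
    if not missing:
        return "normal hearing sensitivity inferred"
    return f"tone burst ABR threshold search at {sorted(missing)} Hz"
```

For example, responses at all four frequencies would end the assessment, while an absent response at 500 Hz alone would trigger a tone burst ABR threshold search at that frequency.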

Small also proposed using evidence-based maximum ASSR bone conduction levels for infants with normal hearing. However, she cautioned against the use of ASSR to confirm normal bone conduction hearing sensitivity; additional research is needed to establish ASSR frequency-specific correction factors to estimate air and bone conduction stimuli, as well as to confirm maximum bone conduction ASSR response levels in verifying normal bone conduction hearing sensitivity. She noted that tone burst ABR continues to be the gold standard in threshold estimation for children with hearing loss.

8. Beyond the Brainstem: Kevin Munro, PhD, moved up a couple of levels in the auditory pathway to discuss his team's recent research exploring the use of cortical auditory evoked potential (CAEP) assessment in the management of children with hearing loss. They are completing measurements with the NAL/Frye Electronics HearLab system. The HearLab system, designed specifically for CAEP assessment in infants, allows for the acquisition of CAEP responses to four different speech sounds, /m/, /g/, /t/, and /s/, and has features that make it attractive for clinical use. For example, it allows clinicians to accurately calibrate the level of different speech signals during sound field presentation, allows for determination of the residual EEG noise level, and notably uses statistical analysis in automated CAEP response detection. These features simplify the process of clinical implementation of a measure that historically has been somewhat complex to complete in the clinic.

Munro and his team completed CAEP assessments of 104 normal-hearing infants, all of whom presented a response to at least one speech stimulus. Work is ongoing to measure the CAEP of 200 infants with hearing loss. Munro's team is pursuing this goal using the HearLab system and an innovative mobile audiology van.

Munro pointed out several reasons why CAEP should be included in the battery of measures used to evaluate children with hearing loss. He mentioned several situations in which ABR results may be equivocal (e.g., severe sensory hearing loss, middle ear dysfunction, etc.), and noted that ABR does not estimate hearing sensitivity in children with auditory neuropathy spectrum disorder (ANSD). In fact, Teresa Ching, PhD, and colleagues at NAL have shown the value of CAEP in determining management decisions for children with ANSD. Finally, Munro noted that present CAEP responses provide reassurance to families that children with hearing loss can hear with their hearing aids or cochlear implants, and that CAEP may be used to determine if a hearing device has been appropriately selected and fitted.

7. Give me your tired masses: Ben Hornsby, PhD, shared recent information from his team's research on fatigue in children with hearing loss. Hornsby reported higher levels of fatigue in children with mild to severe hearing loss compared with children with normal hearing and those with other chronic health conditions. He also noted physiological indicators of fatigue in children with hearing loss. Specifically, children with hearing loss show elevated cortisol levels that appear to increase with age, which is consistent with long-term exposure to stress.

Hornsby also shared some valuable advice on managing fatigue in children with hearing loss. He noted that clinicians should be on the lookout for signs of fatigue, including tiredness, sleepiness in the morning, inattentiveness and distractibility, mood changes, changes in classroom contributions, and difficulty with following instructions.

Finally, he urged clinicians to help families, educators, and students recognize the signs of fatigue, and promote stress-management strategies, including relaxation and regular exercise.

6. Beyond the test booth: Cheryl DeConde Johnson, EdD, gave a thought-provoking presentation on the importance of facilitating self-determination in children with hearing loss. She described self-determination as “the attitudes and abilities required to act as the primary causal agent in one's life and to make choices regarding one's actions free from undue external influence or interference.” Self-determination involves a person acting “in ways that make positive use of knowledge and understanding about his or her own characteristics, strengths, and limitations.” She noted that children with strong self-determination skills have a greater chance of becoming independent and successful adults. A self-determined person more effectively solves problems, understands what supports are required for success, sets goals, and makes decisions based on his/her personal resources.

As she has done so adeptly throughout her career, DeConde Johnson reminded us that effective intervention for hearing loss in children extends far beyond matching the output of a hearing aid to prescriptive targets. Self-determination in children with hearing loss may be promoted by helping the child understand his or her hearing loss and its effects on social interactions. Self-determination may also be promoted by including the child in decision-making activities such as choosing a hearing device. The clinician should work with the child's family and educators to ensure that they support the child's needs to pursue autonomy and success in real-world situations. The child's clinician and immediate community should provide assistance with developing the child's skills in problem solving, goal-setting, and planning, as well as in fostering self-esteem.

Of note, DeConde Johnson reported that one of the most important components of developing self-determination in children with hearing loss is the opportunity for the child to interact with other children with hearing loss. These opportunities may be few for many children with hearing loss because they typically attend classes with children with normal hearing. Pediatric audiology programs should attempt to develop support groups in which children with hearing loss can meet regularly.

5. Audibility is king: Mary Pat Moeller, PhD, discussed the clinically relevant findings of the NIH-funded, multi-center (Boys Town, University of North Carolina, and University of Iowa) study of children with mild to severe hearing loss. Her presentation was framed within the acronym, ACCESS, which encapsulates the factors that influence the outcomes of children who are hard of hearing:

A: Audibility is optimized.

C: Carefully fit and closely monitored devices.

C: Consistently worn devices from early infancy.

E: Environment conducive to language learning.

S: Selected at-risk areas of language are a focus.

S: Service provision is optimized.

Moeller presented data that showed greater language outcomes (by two-thirds of a standard deviation) for children whose hearing aids produced output in the upper quartile of aided speech intelligibility index (SII), relative to children in the lower quartile.

Along those same lines, Moeller explained that the aided SII of most children in the study was not optimized because their hearing aid output levels were not matched to the DSL 5.0 targets for children. Audibility and carefully fitted devices matter.

Datalogging results also showed significantly better outcomes for children who wore their hearing aids for longer than 10 hours a day. Consistency matters.

Better outcomes were also achieved by children whose families provided a rich and robust model of spoken language. Environment and auditory verbal therapy matter.

Moeller also reported that children who are hard of hearing are at risk for delays in morphology (e.g., word endings such as the plural /s/ in the word-final position). The attention of clinicians to at-risk areas matters, and service optimization matters. For more about Moeller's work, see Ear Hear. 2015;36 Suppl 1:1S http://bit.ly/2jE6vLp.

4. Allowing the king to reign: Marlene Bagatto, AuD, PhD, and Susan Scollie, PhD, both of whom hail from the birthplace of contemporary pediatric hearing aid verification, delivered presentations on the verification of modern hearing aid technology for children. Bagatto reminded us of the importance of measuring the real-ear-to-coupler difference (RECD), not only in estimating the hearing aid output in the ear canal, but also in accurately calculating a child's hearing threshold in dB SPL at the eardrum. She briefly reviewed the implications of the recent changes in the ANSI Standard for Real Ear Measurement (ANSI S3.46-2013) on RECD and hearing aid verification for infants and children. Her discussion on contemporary RECD practices is beyond the scope of this column. However, pediatric audiologists who fit hearing aids on children should be fully aware of this important information. Bagatto also encouraged attendees to take note of the SII obtained in pediatric fittings and to compare individual SIIs with normative values for a similar degree of hearing loss.

Scollie focused on the verification of frequency-lowering technology in modern hearing aids. She described an evidence-based protocol for real ear verification of frequency-lowering technology. The protocol involves the presentation of calibrated speech sounds to determine the audibility of high-frequency speech sounds with and without frequency-lowering technology, as well as the spectral alteration that frequency-lowering technology causes for both vowels and high-frequency speech sounds. Again, a full description of this protocol is beyond the scope of this article, but a detailed discussion can be found online: http://bit.ly/2jEjrkf.

3. Good, good, good, good vibrations: Bill Hodgetts, PhD, addressed the prescription and verification of bone conduction devices. He noted that there are evidence-based fitting protocols for providing children with air conduction hearing aids. However, most clinicians are not yet equipped with evidence-based, standardized methods for fitting and verifying bone conduction devices. He discussed the possibility of using skull simulators to measure the output level of bone conduction devices, but noted a 2015 survey by Gordey and Bagatto which revealed that fewer than 15 percent of clinics possess skull simulators, a limitation that must change. Hodgetts described DSL fitting targets for output force levels of bone conduction devices measured using a skull simulator. Along with Scollie and colleagues, he is working toward developing a DSL prescriptive fitting method to verify bone conduction devices using skull simulators coupled with clinical real ear measurement systems. Hodgetts also shared study results indicating better speech recognition in quiet and in noise when bone conduction devices are fitted through an evidence-based fitting method.

2. When a cochlear implant is not the answer: Craig Buchman, MD, shared an update on his research on auditory brainstem implants (ABI). He noted that a young child will most likely be considered for an ABI if he or she is diagnosed with absence of a cochlear nerve. Buchman was careful to mention that children diagnosed with cochlear nerve deficiency should first undergo cochlear implantation to confirm that no benefit may be obtained from a cochlear implant.

As expected, Buchman reported that ABI outcomes are likely to be much poorer than the typical outcomes children get from cochlear implants. None of his five initial ABI recipients have developed open-set speech recognition. However, they all have sound detection/awareness and are vocalizing with varying degrees of success. Four of his pediatric recipients have developed some level of closed-set speech recognition, and three communicate through Total Communication. Also of great importance, Buchman reported that none of his patients have experienced medical complications, so ABI intervention appears to be safe when administered by an experienced and qualified team of interdisciplinary professionals. Finally, he noted that any child who receives an ABI should continue to be exposed to sign language, as it is highly likely that manual communication will be the child's primary means of communication.

1. Not your father's screening: Cynthia Morton, PhD, explored the benefits of genetic assessment in the management of children with hearing loss. She reported that there are 123 genes known to cause hearing loss, and briefly described Harvard's OtoGenome—a panel that analyzes a child's blood sample to search for 87 different gene mutations associated with hearing loss. OtoGenome has identified a genetic cause for hearing loss in 23 percent of examined cases.

Morton noted that understanding the precise etiology of hereditary hearing loss makes a difference in the treatment and management of a child with hearing loss. She mentioned that certain genetic etiologies, such as connexin 26, are associated with excellent outcomes in cochlear implantation, while children with other genetic etiologies, such as pejvakin mutations, may receive limited to no benefit from hearing aids.

Furthermore, there are syndromic disorders causing hearing loss that cannot be distinguished from nonsyndromic hearing loss at birth. Examples include Alport, branchio-oto-renal, Pendred, Jervell and Lange-Nielsen, and Usher syndromes. These syndromes can place a child's well-being at risk; early identification is critical to ensuring the child's safety and proper development.

Morton made the case that genetic screening for hearing loss should be considered at birth. Several hereditary forms of hearing loss cause progressive hearing losses not identified via traditional forms of newborn hearing screening. Also, cytomegalovirus (CMV), the most common cause of non-hereditary congenital hearing loss, may be identified through early screening. Confirming CMV as the cause of hearing loss is impossible if testing is not done during the newborn stage.

High financial costs and system logistics will likely delay the inclusion of genetic testing as a component of newborn hearing screening programs. However, Morton built a convincing case for why the audiology field should work toward that direction.

Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.