Breaking News
Read breaking news and articles published ahead of print. Tell us what you think, and comment on your colleagues’ views about Hearing Journal articles.
Tuesday, July 08, 2014
Using a digital sound level meter, Sonova audiologist Thiago Diniz recorded fans' noise levels during the World Cup quarterfinals.
 
By Alissa Katz
 
What does victory sound like? During the World Cup quarterfinals, fans of Brazil’s team registered as the noisiest—their collective cheers reached 116 dB—leading the Hear the World Foundation to predict that Brazil will be the world champion.

The Hear the World Foundation, a hearing loss awareness initiative by manufacturer Sonova, used a digital sound level meter to determine noise levels in a large public fan zone in the Vila Madalena neighborhood of São Paulo, Brazil, the host country of the 2014 World Cup.

When the decibel level spiked above 90, Sonova audiologist Thiago Diniz recorded the level, the team the crowd was cheering for, and how long the noise was sustained. At the end of each game, the average decibel level of each team’s fans was calculated by adding the decibel levels at each spike and dividing that sum by the total number of spikes.
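In other words, the foundation took a simple arithmetic mean over the recorded spikes. Here is a minimal sketch of that calculation in Python; the function name and sample readings are illustrative, not the foundation’s actual data:

    def average_spike_level(spike_levels_db):
        # Arithmetic mean of the decibel readings logged at each spike above 90 dB.
        if not spike_levels_db:
            raise ValueError("no spikes recorded for this team")
        return sum(spike_levels_db) / len(spike_levels_db)

    # Example: three hypothetical spikes logged for one team's fans during a match.
    print(average_spike_level([112, 118, 118]))  # -> 116.0 dB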

Here are the noise levels of fans cheering the other seven quarterfinal teams:
  • France, 99 dB.
  • Colombia, 97 dB.
  • Argentina, 95 dB.
  • Netherlands, 95 dB.
  • Costa Rica, 93 dB.
  • Belgium, 91 dB.
  • Germany, 90 dB. 
Because decibel levels climb so high, hearing protection is particularly important at sporting events, the foundation noted in a news release. To prevent noise-induced hearing loss, Hear the World suggested using earplugs during games, giving the ears a break by stepping away at halftime, and, for those watching from the couch, muting or shutting off the radio or television.

Friday, June 20, 2014
 
The Hearing Journal received two 2014 APEX Awards for Publication Excellence: one in the category of Electronic Media–Apps for our iPad edition, and the other in Health & Medical Writing for our cover story about the proposed class action lawsuit against Walmart claiming unlicensed hearing aid sales in Texas. The cover story was written by Heather Lindsey, who regularly contributes to HJ.
 
The awards are given for “excellence in graphic design, editorial content, and the ability to achieve overall communications excellence.” This year, there were about 2,100 entries.
 
To download our iPad app, which includes exclusive podcasts and videos, as well as quick links to extra information, visit bit.ly/AppHearingJ. To read the article about the Walmart case, see page 8 of the August 2013 issue.

Monday, June 09, 2014
By Chuan-Ming Li, MD, PhD, & Howard J. Hoffman, MA
 
       
Dr. Li, left, is a statistician (health/medicine) and Mr. Hoffman is director of the Epidemiology and Statistics Program, Division of Scientific Programs, National Institute on Deafness and Other Communication Disorders (NIDCD), National Institutes of Health (NIH). Dr. Li performs analyses for epidemiological studies and reviews concept proposals for NIDCD clinical trials.
 
Hearing loss is the third-leading cause of years lost due to disability worldwide. (The Global Burden of Disease: 2004 Update. Geneva, Switzerland: World Health Organization [WHO]; 2008.) An estimated 299 million men and 239 million women globally have “moderate or worse” hearing loss, Gretchen Stevens and colleagues reported on behalf of the 2010 Global Burden of Disease Hearing Loss Expert Group (Eur J Public Health 2013;23[1]:146-152).

Even more common, however, is depression. In the WHO report, unipolar depression ranked first among causes of years lost due to disability worldwide. The burden of depression is 50 percent higher for women than for men.

We and our colleagues recently reported on the relationship between depression and hearing loss using the National Health and Nutrition Examination Survey (NHANES), 2005-2010, which includes a nationally representative sample of the civilian, noninstitutionalized population (JAMA Otolaryngol Head Neck Surg 2014;140[4]:293-302).
 
The prevalence of moderate-to-severe depression was significantly higher among adults age 18-69 who had self-reported hearing loss (11.4%) compared with those who reported good-to-excellent hearing (5.9%). The prevalence of depression rose as the degree of reported hearing loss increased from “a little trouble,” to “moderate trouble,” to “a lot of trouble” hearing, but not for individuals self-identified as deaf.

No relationship between depression and self-reported hearing loss was found among adults age 70 and older. In women 70 and older, there was a significant association between depression and an exam-based measure of moderate hearing loss (better ear [BE] pure-tone average [PTA] of 35-50 dB HL), but not in men of that age group.
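For readers less familiar with the exam-based measure: a pure-tone average is the mean of audiometric thresholds across a set of test frequencies in each ear, and the better ear is the one with the lower average. A minimal sketch follows; the frequencies and threshold values are illustrative, not necessarily those used in the study:

    def pure_tone_average(thresholds_db_hl):
        # Mean of pure-tone thresholds (in dB HL) across the tested frequencies.
        return sum(thresholds_db_hl.values()) / len(thresholds_db_hl)

    # Illustrative thresholds (Hz -> dB HL) for each ear.
    left = {500: 30, 1000: 35, 2000: 45, 4000: 50}   # PTA = 40.0 dB HL
    right = {500: 35, 1000: 40, 2000: 50, 4000: 55}  # PTA = 45.0 dB HL

    # The better-ear PTA comes from the ear with the lower (better) average;
    # here it falls within the 35-50 dB HL moderate range described above.
    be_pta = min(pure_tone_average(left), pure_tone_average(right))
    print(be_pta)  # -> 40.0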

 
 

PARADOXICAL RESULTS
These paradoxical results may reflect the tendency of people in different age groups to assess their hearing loss in distinct ways.

For example, older adults may be less likely than younger adults to self-report hearing loss in relation to activity limitations. While men begin experiencing hearing loss in midlife, perhaps due to noise exposure, onset in women occurs 15 to 20 years later, typically around age 70.

Hearing loss is much more common than vision loss among older adults, as Vincent A. Campbell et al demonstrated (MMWR CDC Surveill Summ 1999;48[8]:131-156). Also, coping with hearing loss is different from dealing with other disabilities, since hearing loss is an invisible condition, frequently unrecognized by healthcare professionals.

The Seniors Research Group characterized the impact of hearing loss as often profound, with consequences for a person’s social, functional, and psychological well-being, as well as overall health; by diminishing the ability to communicate, hearing loss isolates people from friends and family. (The Consequences of Untreated Hearing Loss in Older Persons. Washington, DC: National Council on Aging; 1999.)

What can people with hearing loss do to avoid depression? We suggest they seek hearing healthcare and consider joining national organizations for people with hearing loss. When recommended, rehabilitation via hearing aids, assistive listening devices, and similar approaches may ease the difficult personal and social adjustments that attend hearing loss.
 
Health professionals can improve identification of hearing loss through regular hearing screening. The quality of life of people with hearing loss can also improve if doctors recognize the signs and symptoms of depression and refer patients for mental health services.

While treatment can help the majority of people with depressive illness, even those with the most severe depression, many people do not seek it. Effective treatments for depression include medication, psychotherapy, and other methods.

Although the mechanism connecting hearing loss with depression is unclear, the association between the two conditions suggests that treating people who have hearing loss at early stages may reduce their risk of developing depression.

Monday, June 09, 2014
By Nina Kraus, PhD, & Samira Anderson, AuD, PhD

Dr. Kraus is professor of auditory neuroscience at Northwestern University, investigating the neurobiology underlying speech and music perception and learning-associated brain plasticity.
 
Dr. Anderson is an alumna of Dr. Kraus’s Auditory Neuroscience Laboratory and assistant professor in the University of Maryland Department of Hearing & Speech Sciences, where she is studying the effects of hearing loss and aging on neural processing in older adults.
 
 
Most Americans are monolingual, but, with increased population diversity and international travel, more people are interested in the impact of bilingualism.

Since the bilingual brain develops the facility to switch from one language to another, executive function abilities that engage attention and inhibit irrelevant information are present to a greater degree in bilingual than monolingual people (Cerebrum 2012:13). These effects are most pronounced in childhood and older age.

Babies raised in bilingual homes more easily adapt to stimuli changes in an auditory learning task than babies raised in monolingual homes (Proc Natl Acad Sci U S A 2009;106[16]:6556-6560).

Bilingualism also seems to have a protective effect against age-related cognitive decline. In a group of patients with probable Alzheimer’s disease, bilingual patients were diagnosed 4.3 years later than monolingual patients, suggesting that bilingualism contributes to a cognitive reserve that can partially compensate for neural pathology (Neurology 2010;75[19]:1726-1729).

With the exception of sign languages, all languages are auditory-based. Therefore, the cognitive benefits that accompany bilingualism extend to the auditory system.
 
In previous issues, we have discussed the benefits of playing a musical instrument as a form of long-term training (Hear Res 2014;308:109-121). The use of multiple languages may also be viewed as long-term training.
 
KEEPING CONSISTENT
Jennifer Krizman and colleagues evaluated the effects of bilingualism on neural speech encoding in monolingual and bilingual adolescents who were matched based on IQ and socioeconomic status.

In the first study, the researchers evaluated auditory neural responses to the syllable /da/ presented in quiet and in six-talker babble noise (Proc Natl Acad Sci U S A 2012;109[20]:7877-7881).

Bilingual adolescents had stronger subcortical encoding of speech in noise than monolingual adolescents, with larger response amplitudes and greater representation of the fundamental frequency.

These effects were seen in both the quiet and noise conditions, but the differences were more pronounced in noise. The spectral amplitudes of the monolingual adolescents were markedly diminished by noise, whereas there were virtually no changes between conditions in the bilingual adolescents.

In a follow-up experiment, Krizman and colleagues found higher response consistency in brainstem and cortical responses among bilingual compared with monolingual adolescents (Brain Lang 2014;128[1]:34-40).
 
 
Adolescents raised in bilingual households had stronger subcortical responses to a speech syllable in the time (A) and frequency (B) domains compared with monolingual adolescents. In addition, the bilingual adolescents had greater response consistency for brainstem (C) and cortical (D) recordings than monolingual adolescents. Finally, brainstem response consistency was positively related to language proficiency only in the bilingual group (E). *p < 0.05, **p < 0.01, ***p < 0.001. (Adapted from Proc Natl Acad Sci U S A 2012;109[20]:7877-7881 and Brain Lang 2014;128[1]:34-40.)
 

Bilingual people must exercise attentional control and inhibit one language when conversing in another language. A language-rich environment may increase attention to linguistic stimuli, and, with continued exposure, the auditory system may strengthen its automatic response to stimulus features.

Krizman et al’s work supported this idea, finding that sustained selective attention, as assessed by the Integrated Visual and Auditory Continuous Performance Test (IVA + Plus; braintrain.com), was positively related to the amplitude of the fundamental frequency in the noise condition, but only in bilingual participants (Proc Natl Acad Sci U S A 2012;109[20]:7877-7881).

Furthermore, brainstem response consistency was positively related to language proficiency and auditory attentional control (Brain Lang 2014;128[1]:34-40). Attentional control is regulated by the executive system in the prefrontal cortex.
 
The relationships among subcortical response consistency, language proficiency, and attentional control suggest that the efferent connections from the prefrontal cortex to the brainstem are strengthened by activation and suppression of different languages in the bilingual brain.

We know that nonnative English speakers have poorer performance on speech-in-noise tests than native English speakers, as Catherine L. Rogers and colleagues noted (Appl Psycholinguist 2006;27[3]:465-485). Why would that be the case if the neural encoding of speech is enhanced in bilingual people?

Rogers et al suggested that bilingual speakers need greater attentional resources to select a target word or phoneme because of the presence of competing languages. This allocation of resources to attention comes at the cost of the resources needed to accurately perceive the speech signal in noise.

A similar effect is seen when speech-in-noise recognition is compared in younger and older adults. In older adults, speech-in-noise performance declines relative to that of younger adults when the cognitive load increases, presumably because of limits on available resources (J Acoust Soc Am 1995;97[1]:593-608). Therefore, despite enhanced neural encoding, the increased cognitive load in bilingual people may reduce speech-in-noise performance.
 
For these reasons, we clinicians need to consider that aging and hearing loss may have a greater impact on speech-in-noise performance in patients who are bilingual.

Monday, June 09, 2014
By Ryan McCreery, PhD
 
Dr. McCreery is associate director of audiology and staff scientist at Boys Town National Research Hospital in Omaha, NE.
 

Hearing aids have grown increasingly sophisticated, offering signal processing designed to analyze the acoustics of a listening situation and adapt to meet the needs of the listener. This adaptation often happens automatically, without input from the listener, and can include changes to the frequency response, noise reduction, directionality, or listening program.

Automatic switching is marketed as improving ease of hearing aid use for adults, who do not have to change programs or adjust the hearing aid manually when the feature is enabled.
 
However, among professionals who serve children, the availability of automatic switching has created some debate over the potential benefits and limitations of the algorithms for pediatric hearing aid users.
 



On one hand, the ability to have the hearing aids adjust without input from the child or caregiver would seem to be greatly beneficial. On the other hand, there are important reasons why this feature may not be appropriate for infants and young children.
 
 
The evidence is limited, even for adults.
Hearing aid features that automatically adjust to the listening environment have only recently become available, made possible by the development of digital signal processing in the devices and subsequent advancements over the last decade.

These changes occurred so rapidly that it’s easy to forget it wasn’t long ago that the only way to adjust hearing aid programming was with a miniature screwdriver. Given this short time frame, there has been very little research to support the efficacy or effectiveness of these features, even for adults.

In addition, the specific parameters that trigger changes in the hearing aids are often proprietary and difficult to measure with clinical verification techniques, which further limits the evidence base for decisions about automatic features.
 
 
Audibility for speech must be maintained.
The purpose of fitting amplification for children is to make speech audible and provide these young patients with access to their auditory environment. Some automatic features alter not only the signal processing of the hearing aid, but also the frequency response of the device.

Changes in the frequency response may modify audibility in unpredictable ways, increasing the risk for over-amplification or limited audibility. If clinicians wish to provide multiple programs that automatically switch based on listening environment, the audibility of each of those programs should be assessed using probe microphone verification to determine whether audibility is affected.

If the effect of the automatic feature on audibility cannot be determined from verification, that feature should not be implemented for children.
 
 
Consider the applicability of the processing to children.
Many automatic features have been developed based on the listening needs and environments of adults. Infants and young children may violate the assumptions behind these designs in important ways.

For example, directional microphones that switch the microphone sensitivity based on the spatial location of speech and noise signals in the environment may operate under the assumption that the listener is stationary. While this assumption may be true for older school-age children, it would be difficult to characterize younger children as stationary.

How these differences between adults and children influence the effectiveness of automatic features has not been evaluated. The differences should be carefully considered prior to implementing automatic features for children.
 
 
Think about alternative strategies.
Many automatic hearing aid options are designed to improve comfort or ease of listening in background noise. Such options include automatic changes to digital noise reduction and directional microphone features used when the hearing aid detects noise in the environment.

In many listening situations, remote microphone hearing assistance technologies, such as frequency-modulation (FM) or digital-modulation systems, can provide a bigger improvement in comfort and ease of listening compared with other hearing aid features.

Remote microphone technology may not be optimal for every listening environment or situation, though. Clinicians should consider a range of options to combat the negative effects of background noise, including a combination of hearing aid features and hearing assistance technology.
 
 
Understand the manufacturer default settings for children.
Hearing aid manufacturers have gone to great lengths in recent years to create child-specific recommendations for their products’ features. These default settings often apply validated pediatric prescriptive formulae and conservative feature activation strategies based on the child’s age, which is entered into the programming software.

Many manufacturers have elected to limit the activation of automatic features in their pediatric default settings. However, some manufacturers maintain the availability of automatic learning features that increase hearing aid gain over time.

It is important to become familiar with what automatic features may be available in manufacturer pediatric default settings.
 
 
Some automatic features may be desirable for children.
Aspects of automatic hearing aid signal processing may present specific benefits for children who wear hearing aids.
 
For example, features that automatically switch the input of the hearing aid when an FM system is activated may be advantageous for children and their caregivers. In such cases, it is important for whoever is managing the hearing aids—the child, parent, caregiver, or teacher—to understand how the feature works.

Since teachers, in particular, may manage different amplification and hearing assistance technology systems, they may assume from previous experience that the FM signal has to be activated manually. This highlights the importance of specific instructions and training when new technology is provided to a child.
 
 
GREAT POTENTIAL, LIMITED EVIDENCE
Features that automatically switch in hearing aids have great potential to increase listening comfort and ease of use in a wide range of situations. However, at this time, there is limited evidence that these features are appropriate for children.

If these features are activated for children, care should be taken to ensure that speech audibility is maintained and that no reasonable alternative could provide the same benefits.