Recent animal studies indicate that even moderate levels of noise exposure can damage the synaptic ribbons between inner hair cells and auditory nerve fibers without affecting audiometric thresholds, giving rise to the term “hidden hearing loss” (HHL). Despite evidence across several animal species, there is little consistent evidence for HHL in humans. The aim of this study was to evaluate potential electrophysiological changes specific to individuals at risk for HHL.
The high-risk experimental group consisted of 28 young normal-hearing adults who had participated in marching band for at least 5 years. Twenty-eight age-matched normal-hearing adults who were not part of a marching band and had little or no history of recreational or occupational exposure to loud sounds formed the low-risk control group. Measurements included conventional and high-frequency pure-tone audiometry, distortion product otoacoustic emissions, and electrophysiological measures of auditory nerve and brainstem function as reflected in the click-evoked auditory brainstem response (ABR). In experiment 1, ABRs were recorded in a quiet background across stimulus levels (30–90 dB nHL) presented in 10 dB steps. In experiment 2, the ABR was elicited by a 70 dB nHL click presented in a quiet background and in the presence of simultaneous ipsilateral continuous broadband noise at 50, 60, and 70 dB SPL, delivered through an insert earphone (Etymotic ER-2).
There were no differences between the low- and high-risk groups in audiometric thresholds or distortion product otoacoustic emission amplitudes. Experiment 1 demonstrated smaller wave I amplitudes at moderate and high sound levels for the high-risk compared to the low-risk group, with similar wave III and wave V amplitudes. The wave V/I amplitude ratio was enhanced, particularly at a moderate sound level (60 dB nHL), suggesting central compensation for reduced peripheral input in the high-risk group. The results of experiment 2 show that the decrease in wave I amplitude with increasing background noise level was relatively smaller for the high-risk than for the low-risk group, whereas the reduction in wave V amplitude was essentially similar for both groups. These results suggest that masking-induced wave I amplitude reduction is smaller in individuals at high risk for cochlear synaptopathy. Unlike previous studies, we did not observe a difference in the noise-induced wave V latency shift between the low- and high-risk groups.
Results of experiment 1 are consistent with findings in animal studies (which suggest cochlear synaptopathy involving selective damage to low- and medium-spontaneous-rate fibers) and with several human studies reporting changes in a range of ABR metrics indicative of cochlear synaptopathy. However, without postmortem examination of human temporal bones (the gold standard for identifying synaptopathy) from individuals with differing noise-exposure histories, no direct inferences can be drawn about the presence or extent of cochlear synaptopathy in the high-risk group. Results of experiment 2 demonstrate that, to the extent response amplitude reflects both the number of responding neural elements and their synchrony, the relatively smaller change in response amplitude for the high-risk group would suggest reduced susceptibility to masking. One plausible mechanism is that suppressive effects arising at moderate to high levels differ between the two groups, particularly at moderate levels of the masking noise. Altogether, larger-scale datasets spanning different noise-exposure backgrounds, longitudinal measurements (e.g., tracking changes from recreational over-exposure in middle-school to high-school students enrolled in marching band), and an array of behavioral and electrophysiological tests are needed to understand the complex pathogenesis of sound over-exposure damage in normal-hearing individuals.