A 63-year-old patient presents to a primary care office with difficulty understanding her grandchildren at home and friends in noisy restaurants. She is exhausted because she must constantly concentrate to hear others. A self-hearing test on her smartphone indicated “normal hearing,” and her ear exam is unremarkable. The primary care provider refers her to an audiology practice for formal testing, which confirms normal hearing thresholds and word understanding. The patient returns to the office to discuss additional testing and treatment options.
This story is common and, unfortunately, not harmless. About one in six American adults report some trouble with hearing.1 Untreated detectable hearing loss affects an estimated 34 million adults in the United States and contributes to over $22,000 in excess health care costs per individual over a 10-year period.2 These numbers do not account for those who may have hearing complaints despite “normal” testing. Beyond decrements in quality of life, an increasing number of studies indicate a potential association between hearing and cognitive function.3
Thankfully, the past 10 years have brought tremendous innovation in the hearing space. Researchers have identified novel mechanisms of hearing loss that have torn down century-old paradigms.4 Newly formed companies are racing to take advantage of newly identified therapeutic targets and drug delivery techniques. Recommendations by the U.S. FDA and acts of Congress have eliminated barriers to over-the-counter (OTC) hearing aids. Consumer electronics companies have responded by creating affordable “hearable” technology that can detect harmful noise levels as well as provide hearing amplification and noise reduction. In short, we are at a new dawn of understanding, preventing, and treating disorders of the auditory pathway.
For providers who evaluate individuals with hearing concerns, the scientific breakthroughs and market changes are occurring at a head-spinning rate. Clinical practice guidelines have not kept pace. How do you explain to our patient that despite a “normal” hearing test, she still has hearing loss? What rehabilitation option would you recommend? How do you help her navigate unregulated hearing products that are marketed directly to consumers?
DEVELOPMENT OF TESTING STANDARDS
The current definition of normal hearing and our testing paradigms are potentially creating confusion, as well as inhibiting the prevention and treatment of hearing loss. The definition of normal hearing thresholds is not derived from a physiologically obvious cutoff. Analogous to the mythical values of 120/80 mm Hg for blood pressure and a fasting glucose of 126 mg/dL for diabetes, the 25 dB cutoff for “normal hearing” matters because it directs how we counsel and treat patients.
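The weight carried by that single number can be illustrated with a minimal sketch of how continuous thresholds are collapsed into a binary label. This is an illustration only, not a clinical tool; the four-frequency pure-tone average and the 25 dB cutoff are conventional choices, and exact grading schemes vary across guidelines:

```python
def pure_tone_average(thresholds_db, freqs=(500, 1000, 2000, 4000)):
    """Average air-conduction thresholds (dB HL) at the given frequencies."""
    return sum(thresholds_db[f] for f in freqs) / len(freqs)

def classify(pta_db, cutoff_db=25):
    """Binary label used in many historical schemes: at or below the cutoff is 'normal'."""
    return "normal" if pta_db <= cutoff_db else "hearing loss"

# A patient whose thresholds hover just under the line is labeled "normal,"
# even though real-world (e.g., speech-in-noise) difficulty may be present.
audiogram = {500: 20, 1000: 20, 2000: 25, 4000: 25}
pta = pure_tone_average(audiogram)
print(pta, classify(pta))  # 22.5 normal
```

The point of the sketch is that a 22.5 dB and a 27.5 dB average sit on opposite sides of a line that has no physiologic basis, yet may lead to very different counseling.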
It is useful to understand the basis of the current hearing loss criteria to guide the next steps to address emerging hearing evaluation and therapeutics. Modern hearing testing arguably began in 1922 with the invention of the Western Electric 1-A audiometer, which was among the first accessible electronic devices that generated sounds at specific frequencies (commonly referred to as pure tones) and volumes. Before this innovation, clinicians used tuning forks or other devices placed at predefined distances from the patient to crudely assess hearing ability. Today, pure tone audiometry remains the foundation of the audiogram.
With the advent of the audiometer, a new problem arose: What are the thresholds for normal and abnormal hearing? Clinicians initially calibrated their audiometers locally. Because results varied by testing location, the United States Public Health Service commissioned a large-scale hearing study in the 1930s to generate a unified national standard for audiometer calibration. The study consisted of two components: a hearing questionnaire and audiometric testing.5 Its findings became the basis of the first values for normal and abnormal hearing in 1951. Not surprisingly, controversy arose when British studies published in 1952 differed from their American counterparts. The discrepancy resulted from differences in methodology: the British studies provided detailed instructions to participants and used stricter exclusion criteria. The methods of the 1952 British studies were adopted across the international community, and by 1964, 12 studies from five countries using similar criteria had been used to generate international standard hearing thresholds. Those standards remain similar to the values used today.
LIMITATIONS OF AUDIOMETRY
While clinical hearing evaluation has progressed since the heyday of tuning forks, it has not undergone the same rapid innovation as our understanding of auditory physiology and therapeutics. This mismatch likely contributes to frustration among patients and providers, as diagnostics and treatment are opposite sides of the same coin. First, the historic studies from decades ago used patient cohorts of questionable generalizability and survey validity. While some studies provided demographic information, it is unclear whether vulnerable populations were included and how this may have influenced the findings. Second, pure tone audiometry typically assesses a limited range of frequencies (250 to 8,000 Hz). This is more than an octave short of the upper limit of human hearing, which is considered to be upwards of 20 kHz. One can have abnormal pure-tone hearing in this untested range yet still have normal thresholds on a standard audiogram. Third, we know today that hearing loss is not a binary complaint. Subtle symptoms, such as tinnitus, hyperacusis, difficulty with sound localization, and difficulty hearing in noisy environments, may indicate auditory dysfunction before changes appear on an audiogram. Unfortunately, the testing options to quantify these complaints are not universally applied or accepted. Fourth, the limitations of current testing have implications for occupational noise exposure prevention and surveillance, as someone may have normal audiometry despite multiple subjective auditory complaints indicating likely auditory injury. Finally, the longstanding approach of testing hearing in silent conditions underestimates real-world hearing difficulty. Similar to the difference between a resting electrocardiogram and a cardiac stress test in detecting coronary artery disease, we may miss the diagnosis of hearing loss without “stressing” the auditory system.
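The frequency gap described above can be made concrete. Octaves are doublings of frequency, so the untested span from the 8 kHz top of a standard audiogram to the commonly cited 20 kHz upper limit of human hearing works out to about 1.3 octaves (a back-of-the-envelope check, with 20 kHz taken as an assumed round figure):

```python
import math

# Octaves between two frequencies = log2(f_high / f_low).
# 8 kHz is the usual top of a standard audiogram; ~20 kHz is the
# commonly cited upper limit of young, healthy human hearing.
octaves_untested = math.log2(20_000 / 8_000)
print(round(octaves_untested, 2))  # 1.32
```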
Several opportunities exist to improve our patients’ hearing health. First, we must be specific about the language of “normal”: normal thresholds on an audiogram do not necessarily indicate normal hearing. Awareness of the drawbacks of testing is the first step toward better serving patients. Second, audiometric testing should strive to capture more dimensions of hearing loss, including higher-frequency thresholds, hearing in noisy conditions, and subjective patient-reported outcomes. Third, we need to stress the auditory system to better stratify those currently labeled as normal. There is a rich literature on such tests that could be adapted for standard clinical use; speech understanding is routinely assessed today, but generally in quiet environments. Lastly, given the emergence of hearable technology and OTC hearing aids, clinicians should understand that patients traditionally labeled as having “normal hearing” may benefit from sound amplification even if their audiometric results do not suggest significant impairment.
Ultimately, if a patient has subjective hearing loss and the tests do not indicate as much, the tests, not the patient, are likely at fault. Management should be guided by the patient's lived experience of their hearing ability. While the audiogram will always have a role in standard hearing measurement, as scientific breakthroughs in hearing treatment emerge, we should reconsider the audiogram’s reign as the “king” of current testing and our labels of normal hearing.