People who are hard of hearing have long understood that their hearing impairment has broad consequences for their lives that go well beyond difficulty hearing soft sounds (Moore 1995; Dalton et al. 2003; Gatehouse & Noble 2004). Patients who are hard of hearing do not primarily seek hearing help because they have noticed poorer audibility of soft sounds; instead, they complain about an inability to function in complex everyday acoustical environments and demanding listening situations. They complain of poorer environmental awareness, inability to distinguish different talkers in group conversations, and increased listening effort and fatigue from extended communication interactions (Kramer et al. 2006; Lin & Ferrucci 2012; Hornsby 2013). These complaints are all indicative of the effects of hearing loss on higher-order cognitive functioning, which manifest as increased cognitive effort. For the purposes of this article, auditory cognitive effort (or listening effort) is defined as the increased allocation of cognitive resources to improve performance on listening tasks.
Today, our understanding of this phenomenon is that an increase in cognitive effort can result when there is a distortion introduced to the auditory signal at the auditory periphery due to hearing loss or even due to audio processing, such as poor telephone transmissions or disadvantageous hearing aid signal processing (see Lunner et al. 2016, this issue, pp. 145S–154S; Rudner 2016, this issue, pp. 69S–76S). The resulting distorted representation of incoming signals makes complex auditory tasks involving higher-order cognitive functioning (e.g., switching attention from one talker to another) more challenging because the saliency or rich quality of auditory information has been reduced. A visual analogy to the effects of reduced saliency in complex tasks is the increased effort and difficulty of putting a jigsaw puzzle together when the image on each jigsaw piece has been distorted and the shapes of the pieces altered. Sorting pieces into the correct regions of the picture, identifying pieces that go together based on their images, and matching the identified pieces to fit them together based on their shape all become more effortful for the puzzle solver. Not only does the puzzle-solving performance decline and/or slow down, but also the level of effort increases relative to what it would be in a situation where the person was working with undistorted pieces.
A logical extension of the fact that sensorineural damage in the auditory periphery can cause upstream effects on cognitive performance raises questions about how hearing devices might affect cognitive performance. Indeed, hearing aids can be considered to be an extension of the listener’s auditory periphery since they alter the transduction of acoustic sound from the environment to the ears of the listener (see Rudner 2016, this issue, pp. 69S–76S for additional discussion on this concept). Since hearing aids are designed to counteract the consequences of hearing loss, one could assume that hearing aids would also counteract the increases in cognitive effort often reported by people who are hard of hearing in challenging listening conditions when they are not aided.
Historically, however, most hearing aids have not been designed with consideration given to their possible effects on cognitive performance, and until recently the effects of hearing aids on such phenomena as cognitive effort have been unknown. The possibility exists that hearing aids could increase rather than decrease cognitive effort if the manipulation of sound by hearing aids negatively impacts auditory cues critical for complex processing by the auditory system (Lunner et al. 2009; Ng et al. 2014; Souza et al. 2015). The placement of microphones above the pinna or distortion of binaural cues from independent processing at the left and right hearing aids, for example, could reduce the saliency of spatial cues (Best et al. 2010) and make environmental awareness or speech understanding in the presence of multiple talkers more effortful.
BOTTOM-UP AND TOP-DOWN EFFECTS
Research over the past decade has dramatically increased our understanding of the interaction between cognition and both hearing loss and the use of hearing technologies. Both top-down and bottom-up interactions between peripheral auditory function and cognition have been identified in numerous experiments, and cognitive theories have been applied in an attempt to explain sensory-cognitive interactions during speech understanding (for an early and very simple model, see Schneider & Pichora-Fuller 2000; see also Wingfield 2016, this issue, pp. 35S–43S).
The effects of hearing ability on cognitive functioning have been well demonstrated in studies where hearing ability has been altered by manipulating the presentation level of speech (Baldwin & Ash 2011), by varying the signal-to-noise ratio (Pichora-Fuller et al. 1995), by studying people with varying degrees of hearing loss (McCoy et al. 2005), or by the use of different types of hearing aid signal processing. Hearing aid technologies such as noise reduction and directional signal-to-noise ratio improvement have been found to improve reaction time in a dual-task paradigm, a result that is assumed to reflect a reduction in cognitive load, with the potential benefits of improved comprehension, better memory access and storage of heard information, and other benefits resulting from increased working memory capacity (Sarampalis et al. 2009; Ng et al. 2013; Desjardins & Doherty 2014; Ng et al. 2014). The provision of hearing aid technology has also been shown to reduce listening fatigue as measured by reaction time, presumably because the use of amplification reduced listening effort (Hornsby 2013).
The effects of cognitive abilities on auditory functioning have also been demonstrated in several ways. The ability of a hearing aid wearer to benefit from fast-acting compression in hearing aids appears to be modulated by cognitive ability insofar as listeners with better performance on cognitive measures benefit more from fast-acting compression than from slow-acting compression, while listeners with lower performance benefit more from slow-acting compression than from fast-acting compression (Lunner & Sundewall-Thorén 2007; Souza & Sirow 2014). Hearing aid wearers with higher cognitive functioning also appear to have a greater awareness of hearing aid benefit compared with those with lower cognitive functioning (Lunner 2003). Combinations of signal processing algorithms working together also yield greater improvements in word recognition by those with higher working memory capacity (Souza et al. 2015).
Many models have been designed to explain auditory functioning (Zhang et al. 2001; Jepsen et al. 2008), but few have relevance to the concept of listening effort or the relationship between signal fidelity at the auditory periphery and higher levels of auditory-cognitive functioning. As described by Wingfield (2016, this issue, pp. 35S–43S), over several decades, perceptual effort has been related to cognitive models of memory and attention for people with normal sensory abilities. One model that has attempted to adapt key elements of such cognitive models to include the effects of impaired hearing on speech understanding is the Ease of Language Understanding (ELU) model shown in Figure 1. Notably, the ELU model has been developed and applied to explain the effects of distortion in the auditory periphery on spoken language comprehension, including processing routes that may correspond to more or less listening ease or effort, if effort is taken to be the opposite of ease (Rönnberg et al. 2008, 2013). In the ELU model, the quality of the match between the auditory representation of the currently heard speech signal and the long-term stored representation of speech modulates the level of cognitive effort necessary for comprehension: poorer speech signal quality—due to noise, hearing loss, or hearing aid distortion—may induce greater effort to match that impoverished speech phoneme or word to a known representation of that phoneme or word. This model does not address, however, all aspects of how the acoustic characteristics of hearing aids and/or the auditory characteristics of hearing loss may result in increased cognitive effort, or the conditions under which an individual expends greater cognitive effort when listening in real-world situations. These matters will be discussed in detail below.
Given the current state of our understanding of the functional links between auditory and cognitive processing, the following discussion will identify how future research might advance our understanding of these functional links, with a particular focus on the relation between cognitive processing during speech understanding and possible characteristics of hearing technologies. Areas of focus will be the following:
- An expansion of the scope of models such as the ELU to include additional cognitive factors that interact with auditory factors during speech understanding.
- An extension of models such as the ELU to include nonspeech signals.
- The relevance of laboratory measures relating the interactions between auditory processing and cognitive processing to listening in the real world.
- The application of cognitive measures in rehabilitative audiology assessment and their use in regard to the provision of hearing aids and the evaluation of treatment outcomes.
A HYBRID AUDITORY SCENE ANALYSIS AND ELU MODEL
Auditory-cognitive processing involves many complex auditory processes such as auditory spatial awareness, auditory scene analysis (ASA) (including the identification of auditory objects and events that may depend on the segregation of temporally overlapping streams or the integration across temporally sequential streams), the allocation of cognitive resources to meet listening demands (i.e., the expenditure of listening effort), auditory selective attention or focus, and attention switching. Some aspects of cognition, such as auditory memory, may happen after the auditory scene is created and attention has been focused on a target sound source (see Wingfield 2016, this issue, pp. 35S–43S for a discussion of the time course of auditory memory).
Importantly, both hearing loss and hearing aid technology can affect the representations of auditory scenes (Edwards 2007) and working memory. Auditory scene representations can be affected by the alteration of auditory features used to group components of sound together into separate auditory sources, resulting in less salient auditory representations. This degradation of auditory source saliency results in a poorer match to the internal representation, resulting in an increased allocation of cognitive resources and greater cognitive effort. Hence, the effect of hearing loss and hearing aid technology on cognitive functioning, as described in models such as the ELU that are framed in terms of working memory, may be mediated by the intermediate effects of hearing loss and technology on the auditory scene representation. This seems to be a special and important type of matching between the auditory analysis of the external signal and an internal representation of speech when the listener is interpreting the speech in the complex context of a real-world scene. Thus, to account for everyday listening performance and the phenomenon of listening effort, it would be advantageous to create a hybrid model incorporating ASA and attention with the speech understanding components of a model such as the ELU model.
THE IMPORTANCE OF ASA
ASA refers to the organization of auditory signal components into perceptually meaningful objects. ASA plays an important role in the perception of complex sound environments, including speech understanding in those environments (Bregman 1990). Briefly, the acoustic signals from multiple sound sources in an auditory scene arrive at the two ears of the listener. For concurrently occurring sounds, these signals often overlap in both time and frequency, making the extraction of any individual sound source difficult. ASA involves central auditory processing of binaural cues (e.g., interaural time and intensity difference cues) as well as temporal cues (onset, offset, and duration) and pitch cues (fundamental frequency and harmonic structure). On the basis of the auditory analysis of these cues, a listener constructs a representation of an auditory scene corresponding to the cues from the acoustic environment. In this scene, the listener groups the sound components shared by an auditory object or stream (e.g., one talker’s voice) and segregates components from distinct objects or streams (e.g., the voices of multiple simultaneous talkers). More specifically, auditory objects are formed by preattentively extracting features from the auditory signal and grouping together components that have similar features: fundamental frequency, amplitude modulation, frequency modulation, interaural timing differences, and interaural level differences. The listener may then focus their attention on a selected auditory object and ignore other auditory objects.
To understand the speech of a target talker in a complex environment such as a crowded restaurant, the listener must first extract the target speech from the interfering sounds. This process of extraction involves ASA and selective attention. Many cognitive models based on theories of working memory, such as the ELU model, may account for the effortful allocation of resources to understanding the talker’s speech once his or her voice stream has been extracted from the ambient scene and attention has been focused upon the extracted speech signal, but such models typically overlook the possible importance of how much and in what ways impaired peripheral and central auditory processing contribute to the poorer representation of speech in memory. Nevertheless, as reviewed by Wingfield (2016, this issue, pp. 35S–43S), some of the most important early research that influenced the development of cognitive models related to auditory processing began with studies of listening in complex conditions that required ASA; that is, the seminal work of Broadbent (1958) in the 1950s concerning “the cocktail party effect.” Early auditory researchers also anticipated the future need to incorporate cognition into models of hearing (e.g., Davis 1964).
Over the decades, auditory neuroscientists (e.g., Frisina et al. 2001) and cognitive psychologists (e.g., Wingfield & Tun 2007) have continued to recognize the need to combine auditory and cognitive components into one more complete model. Indeed, the model proposed by Wingfield and Tun (2007) even includes a stage dedicated to extracting a target signal from competing sound and allocating attention to it before linguistic processing of the speech. It seems that such a comprehensive model is still needed if we are to make progress in understanding how auditory-cognitive interactions during listening are altered (or not) by hearing loss and by the use of hearing aids or the provision of auditory training or other rehabilitative interventions (see also Pichora-Fuller & Singh 2006).
Many aspects of ASA are “preattentive,” meaning that they happen automatically without being under cognitive control. Shamma et al. (2011), however, hypothesized that while feature streams are made available automatically, the combining of those streams into auditory objects is mediated by attention to create a segregated sound source. This is supported by psychoacoustic research on auditory streaming and informational masking (Carlyon et al. 2001, 2003; Carlile & Corkhill 2015) and by physiological research concerning object formation (Alain et al. 2001; Snyder et al. 2006). Additionally, the intentional application of additional cognitive effort to extract speech in a difficult listening situation suggests that attentional focus is also relevant to the experience of listening effort reported by people who are hard of hearing. Thus, it seems that variations in the quality of the input signal, preattentive and attentive processing of auditory inputs, and the allocation of working memory resources must be considered together if we are to better understand how perceptual and cognitive effort are deployed during speech understanding by people who are hard of hearing with or without hearing aids. These relationships are made explicit in the framework described in this issue’s consensus article (Pichora-Fuller et al. 2016, this issue, pp. 5S–27S).
TYING IT ALL TOGETHER
Figure 2 shows a hybrid ASA and ELU model that incorporates the above factors (note that this hybrid uses the earlier Rönnberg et al. (2008) model rather than the newer Rönnberg et al. (2013) model, since the former is a simpler representation to integrate with and captures the relevant aspects of the ELU model for this discussion). Importantly, the ASA & Attention module with “Feature Extraction, Object Formation, and Selective Attention” is added to the ELU model to include critical higher level auditory processing before the implicit cognitive processing of speech. This is similar to the model suggested by Stenfelt and Rönnberg (2009) that laid out the connection between the auditory periphery and the ELU model, except that in the current hybrid ASA and ELU model object formation and source segregation are directly combined with the ELU model. A second important addition to the ELU model is the inclusion of the pathway from the Explicit Processing component to the new ASA & Attention component, indicating the optional control of source streaming and attention by Explicit Processing.
To understand how this hybrid ASA and ELU model better represents the effect of auditory peripheral and central processing on the need for cognitive effort, consider the situation where a person is trying to listen monaurally to a talker in the presence of several talkers and other nearby sound sources, a situation commonly experienced by someone with single-sided deafness. In this scenario, binaural spatial cues are absent such that there is a reduction in the acoustic cues and features available for auditory object formation and source segregation, thereby resulting in a poorer representation of the target speech. To successfully understand the speech given this poorer representation, the listener could increase listening effort by focusing attention on the cues that are available monaurally (e.g., temporal onset cues or pitch cues based on fundamental frequency and harmonic structure) to differentiate the target talker’s voice from the voices of competing talkers. Meanwhile, the poorer auditory representation of the target speech produces a poorer match to the stored representation of speech, and the listener must then also allocate more cognitive resources to interpret the speech that has been extracted.
The incorporation of ASA components into a hybrid with the ELU model moves us toward a more comprehensive model of speech understanding. Nevertheless, although speech understanding is extremely important, listeners’ daily auditory functioning consists of more than speech understanding (Noble 1983). Both environmental awareness (Karlsson Espmark & Hansson Scherman 2003; Brungart et al. 2014) and music listening (Leek et al. 2008; Schulkin & Raglan 2014) are other important aspects of people’s lives, and the hybrid model proposed above for speech understanding would ideally apply to the interpretation of nonspeech signals as well. People who are hard of hearing suffer from reduced awareness of environmental sounds. A typical example of this is a lack of awareness of someone walking up behind them even though the sounds of the footsteps were audible. One account of such lack of awareness is that hearing loss undermines ASA, producing a less salient auditory scene with poorer representations of auditory objects. As with speech, this poorer representation of environmental sounds would require additional explicit processing of sound rather than simply implicit processing to maintain a level of awareness of the sounds around the listener. This account is consistent with the frequent complaints of people who are hard of hearing about increased fatigue resulting from poorer environmental awareness. In the ELU model, the need for explicit processing could result from a poorer match of environmental sounds to templates of sound representations in memory. The cocktail party effect entails the allocation of partial attention to background sounds while attending to foreground sounds and the ability to switch attention when the background sound gains importance to the listener (see discussion by Wingfield 2016, this issue, pp. 35S–43S). 
The extra effort that people who are hard of hearing must expend to maintain environmental awareness may be somewhat similar to the experience of normally hearing listeners at a cocktail party when they are monitoring poorly represented background sounds with extra attentional effort being expended to switch attention to bring a sound in the background into attentional focus in the foreground. Within the framework described in the consensus article (Pichora-Fuller et al. 2016, this issue, pp. 5S–27S), this would be explained by the Intentional Attention affecting the Allocation Policy to allocate more resources to the auditory task. To fully validate this concept, research using nonspeech signals is needed to provide evidence that the hybrid model could be generalized from the allocation of cognitive resources for speech understanding to the interpretation of the meaning of environmental sounds.
Likewise, a hybrid ASA and ELU model could be applied to listening to music. A common complaint of people who are hard of hearing about music listening is that instruments are muddled or smeared (Gfeller & Knutson 2003; Wessel et al. 2007; Einhorn 2012; Kirchberger & Russo 2015) and that they have a poorer ability to extract and follow individual instruments in an ensemble (Kirchberger & Russo 2015). Within the hybrid ASA and ELU model in Figure 2, poorer representation of segregation cues such as pitch and modulation due to hearing loss or distortion from hearing aids could produce a poorer representation of musical instruments as auditory objects and a poorer match to the internal representation of instrument sounds, resulting in the allocation of more cognitive resources to segregate and attend to individual instruments. The representation of the musical instrument is akin to the representation of the voice of a talker; in both cases, the representation of the sound source may be distinct from the representation of the song or word produced by the source (see Wingfield 2016, this issue, for a discussion of acoustic or phonological versus semantic representations). In this way, the hybrid ASA and ELU model could account for the increased effort required for awareness and interpretation of all sounds as a result of degraded signal representation at the auditory periphery, not just for speech.
MEASURING BENEFIT FROM HEARING AIDS
As our awareness of the effects of hearing aid technology on auditory-cognitive functioning evolves, outcome measures to evaluate benefit from the use of technology other than the traditional measures of the accuracy of speech understanding or the rating of sound quality are needed. Referring to the hybrid ASA and ELU model, speech understanding tests can be used to measure the effects of hearing technology on the output of the model and tests of listening effort can be used to measure the effects of hearing technology on the allocation of working memory resources, but tests are needed to measure the effects of technology on ASA and attention. Outcome measures that more directly test complex auditory processing will help us to understand the effects of hearing loss and the use of hearing aid technology on auditory-cognitive functioning and help to explain why different technology affects not only speech understanding and listening effort but also nonspeech performance such as environmental awareness. In effect, such tests would enable practical connections to be made between the auditory processing of specific cues and the resulting quality of acoustic or phonological representations involved in the auditory-cognitive processing of a variety of real-world sounds. That is, the demand for listening effort to interpret the meaning of auditory objects in complex, real-world auditory scenes may depend in important ways on the quality of source-dependent auditory processing; for example, specific auditory deficits may undermine the perception of some cues more than others, and specific technologies may enhance some cues more than others. Progress will depend on discovering how exactly the quality of auditory processing affects the quality of representations and the effort required to use those representations to succeed in performing complex tasks.
Spatial hearing is a notable example of an aspect of auditory processing in real-world situations that warrants further investigation. It encompasses the use of auditory cues to represent the spatial location of sound sources and the use of that spatial information in performing auditory tasks. Such tasks include locating sound sources, using spatial separation of sound sources to improve speech understanding (spatial release from masking; Freyman et al. 2001), focusing attention on a sound source at a specific spatial location, and maintaining general awareness of auditory objects in the environment. Hearing aid technology could help or hurt these tasks depending on how it transforms and delivers spatial acoustic cues to the hearing aid wearer. For example, a recent study demonstrated with a dual-task paradigm that spatial cues can improve performance in a secondary visual task, suggesting that listening effort was reduced in the primary speech task, even though these same spatial cues had no effect on the accuracy of word recognition (Xia et al. 2015). In an additional example of measuring auditory spatial ability, Xia et al. (2014) measured the speed with which a listener can switch spatial auditory focus with and without pitch cues being available. They found that switching speed increased with the addition of pitch cues, suggesting that increasing auditory cues improves the quality of ASA, creating more salient auditory objects and reducing the effort to focus auditory attention on an auditory target. These results suggest that accurate spatial representation of auditory objects in auditory scenes is important for reducing cognitive effort when a listener attends to a target talker or switches attention from one source to another. Whether a specific hearing aid technology improves or degrades specific auditory spatial cues may turn out to be critical to ASA ability and the subsequent demands for increased allocation of cognitive resources.
Laboratory measures of benefit from hearing aids are typically designed to maximize the effect under investigation through the use of specific sound signals, acoustic listening conditions, and ideal listener qualifications. These ideal laboratory experiments are called measures of efficacy, and they are often best-case situations for maximizing the likelihood of producing a result of statistical significance. A typical example of this would be the use of the S-Test (Robinson et al. 2007) to measure the benefit of frequency-lowering technology with listeners who have steeply sloping losses: both the test and the hearing profile of test subjects are selected to maximize the difference between hearing aid benefit with and without frequency lowering. In contrast to measures of efficacy, measures of hearing aid effectiveness are conducted in conditions of real-world use, with typical hearing aid wearers, and using typical fitting procedures that would be used in a clinical setting. In general, measures of effectiveness of medical treatments are not as robust as measures of efficacy.
While measures of efficacy are typically stronger than measures of effectiveness, effectiveness is more relevant for the patient receiving a treatment. For example, while hearing aid technology may demonstrably improve speech reception thresholds in a laboratory setting, this improvement will have less value to hearing aid wearers if they do not notice an improvement in their understanding of speech in their normal everyday life situations. Similarly, while efficacy measures of cognitive benefit from hearing aid technology in the laboratory provide valuable insight into the effects of hearing aid technology, effectiveness measures are lacking. These real-world measures could be subjective questionnaires, although evidence suggests that there is little correlation between self-assessment and objective measures of listening effort (Mackersie & Cones 2011; McGarrigle et al. 2014). No validation exists showing meaningful benefits from hearing aids in terms of reduced listening effort for hearing aid wearers in everyday real-world situations.
While there is laboratory evidence that cognitive ability can affect hearing aid outcomes, a patient’s cognitive ability is typically not measured in the clinic as part of a hearing aid dispensing protocol. The finding that the ability to notice benefit from hearing aid features is correlated with working memory capacity (Lunner 2003) suggests that measures of a patient’s working memory ability could be used to individualize counseling regarding expectations about and the selection of hearing aid features. Similarly, cognitive spare capacity is correlated with the optimal speed of multiband compression for an individual (Lunner & Sundewall-Thorén 2007; Souza & Sirow 2014) and, more generally, benefit received from hearing aid features is correlated with measures of working memory (Lunner et al. 2009; Rönnberg et al. 2013). Given the relationship between cognitive ability and hearing aid benefit, a need exists for the development of guidelines for how to use measures of a patient’s cognitive ability in the selection and fitting of hearing aid technology for the individual. Validation of the effectiveness of such an application of cognitive measures to hearing aid fittings must also be developed to convince the practitioner of the value of spending more time and effort to obtain cognitive measures. Finally, measures of cognitive ability, including measures of working memory, do not tell the full story of when and why a listener allocates cognitive resources (Pichora-Fuller et al. 1998); refer to the consensus article (Pichora-Fuller et al. 2016, this issue, pp. 5S–27S) for a more general framework on cognitive resource allocation.
Extending the evidence for the link between hearing and cognition requires a broader understanding of how distortion of the representation of sound resulting from peripheral and central auditory processing affects higher level auditory-cognitive functioning. A hybrid model incorporating ASA into the ELU model was proposed that could provide a broader view of how distortion or reduction of the quality of the auditory input signal can affect cognitive effort. The hybrid ASA and ELU model can also account for how auditory factors contribute to listening effort for nonspeech sounds, including awareness of environmental sounds and enjoyment of music.
The investigation of the interaction between the quality of auditory representations and cognitive functioning has largely been limited to the laboratory under ideal conditions for maximizing effects. For measures of cognition to be adopted in clinical practice, it will be necessary to develop measures of cognitive benefit that could be used to gauge the effectiveness of hearing aid technology under normal clinical conditions, with typical hearing aid wearers and with generalization to functioning in real-world situations. Additionally, it would be necessary to validate the effectiveness of the application of cognitive measures, such as measures of working memory, in hearing aid fitting protocols. Until the cognitive effectiveness of technology has been demonstrated, and the benefit of applying cognitive measures to hearing aid fittings has been validated, the inclusion of cognitive measures in clinical hearing aid fitting protocols seems unlikely.
The author would like to thank Kathy Pichora-Fuller, Mitch Sommers, and Graham Naylor for their insightful comments and edits.
Alain C., Arnott S. R., Picton T. W. Bottom-up and top-down influences on auditory scene analysis: Evidence from event-related brain potentials. J Exp Psychol Hum Percept Perform, (2001). 27, 1072–1089.
Baldwin C. L., Ash I. K. Impact of sensory acuity on auditory working memory span in young and older adults. Psychol Aging, (2011). 26, 85–91.
Best V., Kalluri S., McLachlan S., et al. A comparison of CIC and BTE hearing aids for three-dimensional localization of speech. Int J Audiol, (2010). 49, 723–732.
Bregman A. S. Auditory Scene Analysis. (1990). Cambridge, MA: MIT Press.
Broadbent D. E. Perception and Communication. (1958). London, United Kingdom: Pergamon Press.
Brungart D. S., Cohen J., Cord M., et al. Assessment of auditory spatial awareness in complex listening environments. J Acoust Soc Am, (2014). 136, 1808–1820.
Carlile S., Corkhill C. Selective spatial attention modulates bottom-up informational masking of speech. Sci Rep, (2015). 5, 8662.
Carlyon R. P., Cusack R., Foxton J. M., et al. Effects of attention and unilateral neglect on auditory stream segregation. J Exp Psychol Hum Percept Perform, (2001). 27, 115–127.
Carlyon R. P., Plack C. J., Fantini D. A., et al. Cross-modal and non-sensory influences on auditory streaming. Perception, (2003). 32, 1393–1402.
Dalton D. S., Cruickshanks K. J., Klein B. E., et al. The impact of hearing loss on quality of life in older adults. Gerontologist, (2003). 43, 661–668.
Davis H. Physiological and psychological functions in relation to anatomy and physiology. Int Audiol, (1964). 3, 209–215.
Desjardins J. L., Doherty K. A. The effect of hearing aid noise reduction on listening effort in hearing-impaired adults. Ear Hear, (2014). 35, 600–610.
Edwards B. The future of hearing aid technology. Trends Amplif, (2007). 11, 31–45.
Einhorn R. Observations from a musician with hearing loss. Trends Amplif, (2012). 16, 179–182.
Freyman R. L., Balakrishnan U., Helfer K. S. Spatial release from informational masking in speech recognition. J Acoust Soc Am, (2001). 109(5 Pt 1), 2112–2122.
Frisina D. R., Frisina R. D., Snell K. B., et al. Auditory temporal processing during aging. In P. R. Hof & C. V. Mobbs (Eds.), Functional Neurobiology of Aging (2001). New York, NY: Academic Press. pp. 565–579.
Gatehouse S., Noble W. The Speech, Spatial and Qualities of Hearing Scale (SSQ). Int J Audiol, (2004). 43, 85–99.
Gfeller K., Knutson J. F. Music to the impaired or implanted ear. ASHA Lead, (2003). 8, 115.
Hornsby B. W. The effects of hearing aid use on listening effort and mental fatigue associated with sustained speech processing demands. Ear Hear, (2013). 34, 523–534.
Jepsen M. L., Ewert S. D., Dau T. A computational model of human auditory signal processing and perception. J Acoust Soc Am, (2008). 124, 422–438.
Karlsson Espmark A. K., Hansson Scherman M. Hearing confirms existence and identity—Experiences from persons with presbyacusis. Int J Audiol, (2003). 42, 106–115.
Kirchberger M. J., Russo F. A. Development of the adaptive music perception test. Ear Hear, (2015). 36, 217–228.
Kramer S. E., Kapteyn T. S., Houtgast T. Occupational performance: Comparing normally-hearing and hearing-impaired employees using the Amsterdam Checklist for Hearing and Work. Int J Audiol, (2006). 45, 503–512.
Leek M. R., Molis M. R., Kubli L. R., et al. Enjoyment of music by elderly hearing-impaired listeners. J Am Acad Audiol, (2008). 19, 519–526.
Lin F. R., Ferrucci L. Hearing loss and falls among older adults in the United States. Arch Intern Med, (2012). 172, 369–371.
Lunner T. Cognitive function in relation to hearing aid use. Int J Audiol, (2003). 42(Suppl 1), S49–S58.
Lunner T., Rudner M., Rönnberg J. Cognition and hearing aids. Scand J Psychol, (2009). 50, 395–403.
Lunner T., Rudner M., Rosenbom T., et al. Using speech recall in hearing aid fitting and outcome evaluation under ecological test conditions. Ear Hear, (2016). 37, 145S–154S.
Lunner T., Sundewall-Thorén E. Interactions between cognition, compression, and listening conditions: Effects on speech-in-noise performance in a two-channel hearing aid. J Am Acad Audiol, (2007). 18, 604–617.
Mackersie C. L., Cones H. Subjective and psychophysiological indexes of listening effort in a competing-talker task. J Am Acad Audiol, (2011). 22, 113–122.
McCoy S. L., Tun P. A., Cox L. C., et al. Hearing loss and perceptual effort: Downstream effects on older adults’ memory for speech. Q J Exp Psychol A, (2005). 58, 22–33.
McGarrigle R., Munro K. J., Dawes P., et al. Listening effort and fatigue: What exactly are we measuring? A British Society of Audiology Cognition in Hearing Special Interest Group “white paper.” Int J Audiol, (2014). 53, 433–440.
Moore B. C. J. Perceptual Consequences of Cochlear Damage. (1995). New York, NY: Oxford University Press.
Ng E. H., Classon E., Larsby B., et al. Dynamic relation between working memory capacity and speech recognition in noise during the first 6 months of hearing aid use. Trends Hear, (2014). 18, 1–10.
Ng E. H., Rudner M., Lunner T., et al. Effects of noise and working memory capacity on memory processing of speech for hearing-aid users. Int J Audiol, (2013). 52, 433–441.
Noble W. Hearing, hearing impairment, and the audible world: A theoretical essay. Audiology, (1983). 22, 325–338.
Pichora-Fuller M. K., Johnson C. E., Roodenburg K. E. J. The discrepancy between hearing impairment and handicap in the elderly: Balancing transaction and interaction in conversation. J Applied Comm Res, (1998). 26, 99–119.
Pichora-Fuller M. K., Kramer S. E., Eckert M. A., et al. Hearing impairment and cognitive energy: A framework for understanding effortful listening (FUEL). Ear Hear, (2016). 37, 5S–27S.
Pichora-Fuller M. K., Schneider B. A., Daneman M. How young and old adults listen to and remember speech in noise. J Acoust Soc Am, (1995). 97, 593–608.
Pichora-Fuller M. K., Singh G. Effects of age on auditory and cognitive processing: Implications for hearing aid fitting and audiologic rehabilitation. Trends Amplif, (2006). 10, 29–59.
Robinson J. D., Baer T., Moore B. C. Using transposition to improve consonant discrimination and detection for listeners with severe high-frequency hearing loss. Int J Audiol, (2007). 46, 293–308.
Rönnberg J., Lunner T., Zekveld A., et al. The Ease of Language Understanding (ELU) model: Theoretical, empirical, and clinical advances. Front Syst Neurosci, (2013). 7, 31.
Rönnberg J., Rudner M., Foo C., et al. Cognition counts: A working memory system for ease of language understanding (ELU). Int J Audiol, (2008). 47(Suppl 2), S99–S105.
Rudner M. Cognitive spare capacity as an index of listening effort. Ear Hear, (2016). 37, 69S–76S.
Sarampalis A., Kalluri S., Edwards B., et al. Objective measures of listening effort: Effects of background noise and noise reduction. J Speech Lang Hear Res, (2009). 52, 1230–1240.
Schneider B. A., Pichora-Fuller M. K. Implications of perceptual processing for cognitive aging research. In F. I. M. Craik & T. A. Salthouse (Eds.), The Handbook of Aging and Cognition (2000). 2nd ed., New York, NY: Lawrence Erlbaum Associates. pp. 155–219.
Schulkin J., Raglan G. B. The evolution of music and human social capability. Front Neurosci, (2014). 8, 292.
Shamma S. A., Elhilali M., Micheyl C. Temporal coherence and attention in auditory scene analysis. Trends Neurosci, (2011). 34, 114–123.
Snyder J. S., Alain C., Picton T. W. Effects of attention on neuroelectric correlates of auditory stream segregation. J Cogn Neurosci, (2006). 18, 1–13.
Souza P., Arehart K. H., Shen J., et al. Working memory and intelligibility of hearing-aid processed speech. Front Psychol, (2015). 6.
Souza P. E., Sirow L. Relating working memory to compression parameters in clinically fit hearing aids. Am J Audiol, (2014). 23, 394–401.
Stenfelt S., Rönnberg J. The signal-cognition interface: Interactions between degraded auditory signals and cognitive processes. Scand J Psychol, (2009). 50, 385–393.
Wessel D., Fitz K., Battenberg E., et al. Optimizing hearing aids for music listening. Proceedings of the 19th International Congress on Acoustics (2007).
Wingfield A. The evolution of models of working memory and cognitive resources. Ear Hear, (2016). 37, 35S–43S.
Wingfield A., Tun P. A. Cognitive supports and cognitive constraints on comprehension of spoken language. J Am Acad Audiol, (2007). 18, 548–558.
Xia J., Kalluri S., Edwards B., et al. Cognitive effort and listening in everyday life. ENT Audiol News, (2014). 23, 88–89.
Xia J., Nooraei N., Kalluri S., et al. Spatial release of cognitive load measured in a dual-task paradigm in normal-hearing and hearing-impaired listeners. J Acoust Soc Am, (2015). 137, 1888–1898.
Zhang X., Heinz M. G., Bruce I. C., et al. A phenomenological model for the responses of auditory-nerve fibers: I. Nonlinear tuning with compression and suppression. J Acoust Soc Am, (2001). 109, 648–670.