

Cognitive Load and Listening Effort: Concepts and Age-Related Considerations

Lemke, Ulrike; Besser, Jana

doi: 10.1097/AUD.0000000000000304


INTRODUCTION

Following the insight that auditory and cognitive functions are interdependent during the comprehension of spoken language and in many other listening tasks (see Cherry 1953 for an early example), the field of cognitive hearing science developed over the past decades (Arlinger et al. 2009). In this context, it was recognized that listening can be effortful in situations that require the intensive use of cognitive processing resources. While the term listening effort has been used in hearing research since the early 1980s (Downs 1982), the topic has received increased attention over the past 10–15 years. In this period, effort has been recognized as an important dimension of many everyday listening tasks, and there has been a diversification of approaches to measuring listening effort, including subjective, behavioral, and physiological measures (for a recent review, see McGarrigle et al. 2014).

There is to date no unified definition of the term listening effort, an issue that is addressed by the list of definitions provided in the article by Pichora-Fuller et al. 2016 (this issue, pp. 5S–27S). An example of the ambiguity in the term’s usage is that listening effort sometimes refers to something resulting from a challenging listening task, whereas in other cases, it is used to denote the voluntary investment of processing resources in the sense of “putting effort into” listening. Furthermore, listening effort is sometimes used to denote a subjective experience, whereas in other cases, it refers to the objective extent to which processing resources are consumed to accomplish the task. While all of these ways of using “listening effort” are appropriate as such, inconsistencies and ambiguities in usage can complicate the interpretation of results within and across studies (see also Pichora-Fuller et al. 2016, this issue, pp. 5S–27S, for a discussion).

In the present article, we consider listening effort an umbrella term for the following two types of effort, between which we will differentiate: The term perceived effort is used for subjective estimations of how taxing a listening task is or was. The objective side, that is, the amount of processing resources allocated to a task, we refer to as processing load, in agreement with the definition of cognitive load/processing load provided in Table 1 in the article by Pichora-Fuller et al. 2016 (this issue, pp. 5S–27S). Furthermore, we use processing effort to refer to the extra resource allocation required for an individual to meet the listening-task goal when the listening condition is adverse and there are obstacles to reaching the task goal. While the term corresponds to what is referred to as mental effort in Table 1 in the article by Pichora-Fuller et al. 2016 (this issue, pp. 5S–27S), we use processing effort for consistency with processing load and because the term mental has different readings in different research fields. For the notion of how willing a person is to put effort into task accomplishment, that is, the readiness to invest resources to accomplish a task goal, we use the term task engagement. Task engagement can be seen as resulting from mechanisms of conation (Phillips 2016, this issue, pp. 44S–51S) and motivation (Pichora-Fuller et al. 2016, this issue, pp. 5S–27S).

The present article provides a description of how the different types of effort may arise. While the described mechanisms may be applicable to many different kinds of listening situations, they are discussed here specifically for listening to and comprehending spoken language. Thus, the conceptual description is designed for situations of spoken communication, a key component of human interaction. The different functional levels of spoken communication have previously been defined as hearing, listening, comprehending, and communicating (World Health Organization 2001; Kiessling et al. 2003). Hearing refers to the passive process of sound reception by the ear. Listening, on the other hand, is defined as requiring the intent to perceive and thus the directing of attention toward a particular sound source. Comprehending refers to the deciphering of the speaker’s intended message and its contextual interpretation. Communicating denotes the multilateral exchange of information, turn taking, planning of utterances, and speech production. The described functions are organized in a strictly hierarchical manner, where each level requires the subordinate levels to operate effectively. The conceptual description presented in this article is restricted to the unilateral, receptive stages of the described cascade. Furthermore, an assumption underlying the presented concept is that processing load and perceived effort can only arise when there is an intent to listen and comprehend, that is, not at the level of hearing but only at the higher functional stages. Note that Humes and Dubno (2010) have defined another term, namely “speech understanding,” which denotes the recognition or identification of open- or closed-set speech materials to the extent that the listener would be able to repeat the material. Speech understanding is more widely studied in hearing research than spoken-language comprehension. Unlike comprehension, understanding does not necessarily require semantic processing of the material or detection of the speaker’s intent. While the term speech understanding does not fit seamlessly into the hierarchy of stages of spoken communication, the conceptual description presented in this article does not differentiate between the levels of listening (to spoken language), understanding, and comprehending and should apply to all of them. Thus, while the term spoken-language comprehension will be used mostly in the concept descriptions, the other notions can be read implicitly.

Two factors play an essential role in the manifestation of effort. First, the processing resources of the human cognitive system, including perception, are limited in their capacity and are shared between activities (Kahneman 1973; see also Wingfield 2016, this issue, pp. 35S–43S). Second, tasks, including listening tasks, are often complicated by environmental factors that disturb an individual’s goal achievement by direct interference, distraction, or competition. The following section discusses how processing effort is created by the convergence of limited capacity and situational adversity, that is, as a result of the relationship between internal resources and external demands. The external demands are situational and can change quickly. However, most of the listener-internal auditory and cognitive processing resources required for the comprehension of spoken language can be assumed to be relatively stable across situations, while they may change over longer periods of time with increasing age (e.g., Pichora-Fuller 2003; Gordon-Salant et al. 2011; Anderson et al. 2013). Therefore, in the third part of this article, the presented conceptual description of listening effort is discussed from a lifetime perspective for adult ages, with special attention to older listeners. While both auditory and cognitive abilities change with increasing age, this article is exclusively concerned with the typical cognitive processing difficulties at older ages.

FACTORS INFLUENCING LISTENING EFFORT

Listening and comprehension of spoken language can be performed in a large variety of situations that are defined by both listener-external environmental characteristics and listener-internal factors. Listener-external factors defining a listening situation can be described by a number of physical characteristics, including factors such as (1) the sound levels and numbers of the target source(s) and interfering source(s) and their relative ratio, (2) the frequency spectrum and temporal structure of the target and competing sound sources, (3) the acoustic properties of the room, such as reverberation, and (4) the spatial configuration of the sound sources, both relative to the listener and relative to each other. It can also be relevant whether the target source’s acoustic information is supported by other cues, such as visual or haptic information, and whether there are many (sensory) distractions in the surrounding scene. Furthermore, for spoken-language comprehension, the language of the target speech and of potential interfering speech streams, as well as their accents, can be relevant aspects.

Listener-internal determinants of spoken-language comprehension include abilities of auditory processing and cognitive processing. Relevant aspects of auditory processing include audiometric thresholds (e.g., van Rooij & Plomp 1992; Divenyi et al. 2005), abilities to process supra-threshold spectral cues (e.g., Larsby & Arlinger 1999; Lunner & Sundewall-Thorén 2007), such as frequency selectivity (e.g., Strelcyk & Dau 2009; Hopkins & Moore 2011), abilities to process supra-threshold temporal cues, such as temporal envelope (e.g., Purcell et al. 2004; Fogerty 2011; Ruggles et al. 2012), periodicity (e.g., Summers & Leek 1998; Vongpaisal & Pichora-Fuller 2007), fine-structure information (e.g., Ardoint et al. 2010; Jackson & Moore 2013; Ruggles et al. 2012), and binaural processing of interaural time and level differences (e.g., Blauert 1997; Goverts & Houtgast 2010; Glyde et al. 2013).

Cognitive processing involved in language comprehension includes both domain-specific linguistic abilities and general cognitive functions. In contrast to the highly automatic and task-demand-independent auditory and phonological processing (Friederici 2011), the language abilities involved in spoken-language comprehension are much more controlled. Current neurocognitive models describe distributed neural networks for realizing streams of information processing that include initial processing of syntactic structure, processing of semantic and grammatical relations, and processing of prosody (including intonational contour and accentuation of relevant words in a speech stream), as well as consecutive processes of information integration and interpretation (Hickok & Poeppel 2007; Friederici 2011). At this last stage, semantic and syntactic information is mapped onto representations in semantic memory (a person’s declarative general world knowledge, including language representations) and checked for consistency. The outlined processes require continuous interactions of auditory bottom-up signal processing and cognitive top-down processing guided by context-driven expectations (e.g., Wingfield 2000).

In addition to language-processing abilities, spoken-language comprehension requires listening with intention and attention and thus additional higher-order cognitive functions. Semantic memory contributes to comprehension by providing access to stored linguistic knowledge, such as vocabulary and phonological, syntactic, and grammatical representations, as well as by providing access to world knowledge, including norms of social context and rules of communication. Episodic memory is required to relate incoming information to past episodes of personal experience, including conversations and social interactions with specific people at specific times in specific places. Working memory is needed for temporarily holding information in mind and mentally working with it (Baddeley 1992). It allows the listener to relate incoming auditory information to representations of facts and episodes from semantic and episodic memory. This ability is indispensable for the comprehension of spoken language. Consequently, working memory has been studied more extensively than other cognitive functions in the broad literature on language comprehension (Daneman & Merikle 1996) and in the more specific literature of cognitive hearing science (Rönnberg et al. 2013).

However, working memory needs to be seen in the context of other so-called executive functions. These refer to a family of top-down mental processes that are required when moving from automatic to controlled processing, for example, when one has to concentrate and pay attention (Miller & Cohen 2001; Diamond 2013). The core executive functions of working memory, inhibitory control, and interference control, together with selective attention, are essential for a person to remain focused as well as to flexibly adapt to changing circumstances. Thus, together they enable cognitive flexibility (Diamond 2013), which is, for instance, needed in the fast discourse of spoken communication, when a listener is confronted with comprehensive, competing, and/or distracting information. Furthermore, higher-order executive functions, referring to a variety of processes such as planning, monitoring, evaluating, and reasoning, have a role in guiding attention, thought, and action according to the goals or intentions of a person in a given listening situation (Miller & Cohen 2001).

Finally, speed of information processing needs to be added to this long but nonetheless incomplete list of cognitive resources involved in listening and spoken-language comprehension. Processing speed is essential given that speech runs fast, at rates of about 140 to 180 words per minute in conversations (Wingfield 2000). Also, words in fluent speech often need to be recognized before their full acoustic duration has been completed, or even in hindsight (Marslen-Wilson 1987). This is only possible due to the continuous and fast bottom-up and top-down interaction of auditory and cognitive processing. Furthermore, time pressure increases processing load (Kahneman 2011), underlining the gate-keeping function of processing speed in listening tasks.
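As a simple back-of-the-envelope conversion (the per-word durations are not reported in the cited sources but follow directly from the stated rates), these conversational speech rates leave, on average, only about a third to less than half a second per word:

\[
\frac{60\ \mathrm{s/min}}{180\ \mathrm{words/min}} \approx 0.33\ \mathrm{s\ per\ word}
\qquad\text{and}\qquad
\frac{60\ \mathrm{s/min}}{140\ \mathrm{words/min}} \approx 0.43\ \mathrm{s\ per\ word}.
\]

Within such a window, acoustic analysis, lexical access, and integration with the preceding context must keep pace with the incoming signal, which illustrates the gate-keeping role of processing speed noted above.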

How Listening Becomes Effortful

The listener-internal abilities described in the previous sections interact with the listener-external factors defined by the physical setting of the listening situation. Arguably, the easiest listening situation is an entirely silent environment, where the only sound comes from the source targeted by the act of listening and the listener has perfect hearing. However, most listening situations are nonideal and create adverse conditions for listening (see Mattys et al. 2012). The concept of listening effort is closely related to the notion of the situational adversity of listening. Any listening task requires some processing and thus the allocation of processing resources, with processing load being the amount of processing resources allocated to a task (see Introduction). It can be assumed that the physical setting of the situation determines the situational relevance of specific processing abilities and their level of activation. For example, temporal-envelope resolution may be relevant for speech understanding especially in the presence of fluctuating background sounds (e.g., George et al. 2007). Most likely, not all levels of processing load are effortful for the processing system. Listening can be pursued for long time periods in nonchallenging situations, for instance, when listening to and enjoying an audio book. In contrast, effortful listening over longer time periods is often described as leading to a state of subjective fatigue (Hockey 2013). Accordingly, fatigue may occur as a midterm or long-term consequence of effortful listening. For a taxonomy and discussion of fatigue concepts, please refer to Hornsby et al. (2016, this issue, pp. 136S–144S).

To understand how listening becomes effortful, it is helpful to (re)consider the concept of adversity. While it is true that many listener-external factors, such as background noise and reverberation, make a situation more challenging for all listeners, adversity is not created by the external factor as such but by its interaction with listener-internal abilities and goals. For example, the absolute sound level of the target source will determine adversity primarily in relation to the listener’s hearing thresholds. When target speech is presented to the listener at a very low or even negative sensation level, this will create adversity, because the unfavorable sensation level is an obstacle (see the FUEL term definitions, Pichora-Fuller et al. 2016, this issue, pp. 5S–27S) for the listener to reach their goal of understanding what is being said. Similarly, a particular language of communication will or will not create a challenge for the listener, depending on their proficiency in that language. Thus, the absolute characteristics of a listening condition have to be interpreted in relation to the corresponding processing options of the listener to determine the level of adversity they introduce. We define adversity as the mismatch between external demands and the internal resources to meet these demands.* Furthermore, we define processing effort as the extra processing load, or amount of resources allocated to a task, when the listener tries to maintain the listening-task performance despite such a mismatch (see Figure 1 for an illustration). Thus, processing effort is a direct consequence of the pursuit of listening goals in adverse conditions. Such processing effort loads on executive resources, especially when the need for executive control is increased, for example, when going from rather automatic to more controlled processing (Kahneman 2011). According to Kahneman, this is the case in situations that are characterized by unfamiliarity, uncertainty, conflict or error, changing task demands, multiple tasks loading on or competing for the same processing resources, or time pressure. Under such circumstances, processing load is usually increased due to the “extra” need for executive monitoring and control (e.g., Diamond 2013).

Fig. 1. Schematic illustration of processing load and processing effort for the same listening task in a normal (left) versus an effortful (right) listening situation. There is no temporal relationship between the left and the right part of the illustration. The upper part of the figure represents listener-external processing demands posed by the respective listening situation. The gray bars illustrate that listener-external demands in normal listening (left) are lower than in effortful listening (right). The lower part of the figure displays the listener-internal factors that play a role in listening. Light blue areas represent activation of executive control functions. Dark blue areas represent activation of other cognitive processing resources. Light blue and dark blue areas are the same for normal and effortful listening to indicate that basic resource allocation is the same in the two situations. The additional executive and other cognitive functions that are activated in an effortful situation are indicated by red areas. Red areas thus represent processing effort. Green areas represent activation of auditory processing functions, which is assumed to be constant for normal and effortful listening. Blue, green, and red areas together indicate the overall processing load. Purple areas represent the listener’s personal state, for example, physiological activation level, motivation to perform the task, level of stress, and emotional condition. These factors are assumed to be an underlying base of how processing resources are allocated. The arrow indicates that subjective fatigue is assumed to be a possible consequence of processing effort over time. The figure depicts processing of one listening task. Note that several tasks can be ongoing simultaneously.
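To make the relationship between these terms easier to trace, the following minimal Python sketch restates the verbal definitions given above as toy quantities. It is purely illustrative and is not part of the proposed framework or of Figure 1; all function names, variable names, and numerical values are assumptions chosen only for the example.

```python
# Toy restatement of the verbal definitions (illustrative only, not a model):
#   adversity         = mismatch between external demands and the resources
#                       routinely available to meet them
#   processing load   = total resources allocated to the task
#   processing effort = extra load relative to the same task under non-adverse
#                       conditions, incurred when the listener keeps pursuing
#                       the task goal despite the mismatch

def adversity(external_demand: float, routine_resources: float) -> float:
    """Shortfall between situational demands and routinely available resources."""
    return max(0.0, external_demand - routine_resources)

def processing_load(baseline_load: float, extra_allocation: float) -> float:
    """Total resources allocated to the task."""
    return baseline_load + extra_allocation

def processing_effort(load_adverse: float, load_baseline: float) -> float:
    """Extra load for the same task compared with the non-adverse baseline."""
    return max(0.0, load_adverse - load_baseline)

# Example (arbitrary units): the same task in quiet versus in noise.
quiet_load = processing_load(baseline_load=3.0, extra_allocation=0.0)  # 3.0
noisy_load = processing_load(baseline_load=3.0, extra_allocation=2.5)  # 5.5
print(adversity(external_demand=5.5, routine_resources=3.0))  # 2.5
print(processing_effort(noisy_load, quiet_load))              # 2.5
```

In this toy example, the extra allocation in noise equals the demand-resource shortfall, mirroring the case in which the listener fully maintains task performance; perceived effort, as discussed next, is a separate subjective quantity that need not track these numbers.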

The emergence of processing effort during a listening task does not necessarily imply that the listener perceives the listening situation as effortful. Perceived effort can arise at any level of processing load, even during “noneffortful” processing. It is therefore important to realize that subjective measures of perceived effort and objective measures of processing load/effort are not necessarily congruent, as they assess different concepts. The objective assessment of processing effort is not an easy task. According to the definition of processing effort given in the Introduction, it would require the exact determination of the level of processing load at which a mismatch between resources and demands arises, that is, the point at which adversity is created. To our knowledge, no such method exists. Therefore, processing load (i.e., the overall level or speed of resource allocation) is commonly used as a proxy for processing effort. For an overview of objective ways to assess processing load that are currently in use, see Pichora-Fuller et al. 2016 (this issue, pp. 5S–27S).

Personal State

One aspect of listening and comprehension that has been neglected in the discussion thus far is the listener’s personal state. The personal state may influence both processing load and perceived effort along several dimensions. First, the listener’s physiological state, as, for instance, reflected in tiredness or sleepiness and restrictions in how long attention can be sustained (Parasuraman et al. 1989), affects general alertness and potentially task engagement. Second, the listener’s motivation to reach the task goal influences task engagement and indirectly the expenditure of cognitive resources and the speed at which processing is performed. An illustration of this mechanism can be found in Figure 2 of the article by Pichora-Fuller et al. 2016 (this issue pp 5S–27S). Third, feelings of stress and the listener’s emotional state, such as feelings of anxiety, are factors that can influence listening effort, because they affect the cognitive control system, especially attentional control and other executive functions (e.g., Dolan 2002; Eysenck et al. 2007). For example, an increased stress level may lead to an increased overall level of cognitive activity and thus resource allocation, without any particular task being performed (Ursin & Eriksen 2004). Likewise, an emotional state like grief could presumably influence how effectively processing resources are allocated and the person’s willingness to allocate them to a specific task. Furthermore, the mere expectation that a listening situation will be challenging may lead to the anticipation of effort. Other social-psychological factors such as a person’s level of self-efficacy or perception of the social support from communication partners may influence how much effort a person expects to experience (Schunk 1995). In research studies of speech understanding, these factors are usually disregarded. However, they can be relevant because they determine at which level of external complexity listening becomes adverse for the person.

Description of Processes Within the Presented Concept

In situations without a specific listening goal, it is assumed that unintentional hearing takes place, but cognitive activation for listening is low and the environment is continuously but minimally scanned with some kind of floating attention or alertness to potentially relevant events. In the case of a relevant auditory event, it is hypothesized that focused attention is triggered in an automatic or intentional way (see Pichora-Fuller et al. 2016, this issue, pp. 5S–27S) and cognitive resources are activated to focus processing on the attended event. Directing attention is managed by the executive control system (Posner & DiGirolamo 1998; Posner & Dehaene 2000). Executive control manages the allocation of processing resources to different processing functions, taking into account the listening-task goal, the listening environment, and the priorities of other ongoing tasks. For example, for spoken-language comprehension, more resources will be directed to language-processing functions than during music listening, and in the presence of temporally fluctuating competing streams, post-processing of the outputs from auditory temporal-processing functions will be enhanced compared with situations with steady-state background noise. Within the described concept, it is assumed that the outputs of auditory processing functions, such as frequency filters, are stable across listening conditions. However, these outputs may vary in how they serve cognitive functions, depending on executive-control decisions that steer prioritization and focus via top-down processing.

The executive control system is also assumed to monitor continuously whether the allocated resources successfully meet all active task demands or whether priorities and timing of concurrent tasks need to be adjusted. Furthermore, scanning of the environment continues constantly, such that new relevant events can be integrated into the management of task performance by executive control. It can happen that the processing strategy employed by the executive control system fails to fulfill the demands posed by the listening task undertaken to achieve the listener’s goal, be it because outputs from one or several of the auditory or cognitive processing functions are insufficient, or because the strategy as such was poorly chosen. In such cases, a reorganization of processing can occur, compensating for the insufficiency. Such compensation strategies and adaptations in the allocation of resources may or may not lead to an overall increased allocation of cognitive resources. As illustrated in Figure 1, processing effort can partly arise due to increased activation of task-specific cognitive resources. However, a major part of processing effort is explained by increased activation in the executive control system.
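As a loose paraphrase of this monitoring and reallocation cycle, the sketch below casts it as a simple priority-based allocation over a limited resource pool. It is a toy illustration under our own assumptions (one shared scalar pool, scalar demands and priorities, and a naive compensation rule of re-prioritizing unmet tasks); it is not an implementation of any published model, and all names and values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    priority: float          # importance of the task goal
    demand: float            # resources the task currently requires
    allocated: float = 0.0   # resources granted by executive control

@dataclass
class ExecutiveControl:
    capacity: float          # limited, shared pool of processing resources
    tasks: list = field(default_factory=list)

    def allocate(self):
        """Distribute the limited pool across tasks in order of priority."""
        remaining = self.capacity
        for task in sorted(self.tasks, key=lambda t: t.priority, reverse=True):
            task.allocated = min(task.demand, remaining)
            remaining -= task.allocated

    def monitor_and_adjust(self):
        """If a task's demand is unmet, reorganize by raising its priority."""
        for task in self.tasks:
            if task.allocated < task.demand:
                task.priority += 1.0   # naive compensation rule (assumption)
        self.allocate()

# Example: speech comprehension competing with a secondary task in noise.
control = ExecutiveControl(capacity=6.0, tasks=[
    Task("speech comprehension", priority=2.0, demand=5.0),
    Task("secondary task", priority=1.0, demand=3.0),
])
control.allocate()            # comprehension receives 5.0, secondary only 1.0
control.monitor_and_adjust()  # reorganization cannot exceed the shared capacity
```

The hard capacity constraint in the sketch mirrors the point made above that compensation strategies and adaptations may or may not lead to an overall increase in allocated resources.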

LIFESPAN COGNITIVE DEVELOPMENT

Many of the factors influencing listening effort are prone to age effects. Here, we provide an overview of the differential lifespan development of the cognitive resources for listening and spoken-language comprehension. We also relate this knowledge to common theories of neurocognitive aging to develop an understanding of when and how cognitive resources may constitute restricting or compensating factors for listening under adverse conditions.

Age Differentiation–Dedifferentiation

The development of cognitive abilities over the lifespan is often depicted by inverted U-shaped developmental trajectories (Craik & Bialystok 2008). During childhood, cognitive abilities evolve, and development, including learning, results in increased ability levels as well as in associated increases in ability independence. The “age-differentiation hypothesis” suggests that cognitive functions become differentiated during late childhood and adolescence and remain so throughout most of adulthood (Garrett 1946; Burt 1954). This hypothesis has been extended to later adulthood, when broad declines in ability levels and associated increases in ability interrelations are observed, suggesting that cognitive ability structures begin to “dedifferentiate” toward older age (Spearman 1927; Baltes et al. 1980; Lindenberger & Baltes 1994). Despite such a common pattern of cognitive change across the lifespan, it has to be highlighted that large interindividual differences between older people are observed in ability levels and in the rate at which changes take place. In fact, ability differences are often larger between two persons in the same age group than between persons of different age groups (Smith & Baltes 1996).

Differential Developments in Specific Cognitive Abilities

Cognitive abilities advance differentially across the lifespan. With regard to the list of cognitive functions for listening and spoken-language comprehension introduced earlier, a coarse overview of typical challenges and trajectories for individual abilities across the lifespan is provided here. Language acquisition in child development is based on the accumulation of specific knowledge about the form and function of a language. Thus, a language system is developed that contains established representations, structures, and concepts to enable the mapping of abstract signals to meanings. Language representations are to a large extent part of semantic memory (a person’s declarative general knowledge of the world). Semantic memory representations increase enormously and rapidly during childhood, further accumulate during adulthood including older age, and remain relatively intact in older age (Nilsson 2003). There are observations that vocabulary is reduced in childhood and in older age. For children, this can be attributed to a lack of knowledge. In contrast, in older age, vocabulary decline is not understood as a decrease in language knowledge itself, but rather as a decrease in the cognitive systems that support access to this knowledge (Craik & Bialystok 2008). This is an example of the dichotomy of changes in cognitive representations and cognitive control that has been proposed to be a general mechanism of continuous cognitive lifespan changes (Craik & Bialystok 2006). Both factors, representations and control, as well as the interaction between them and their interaction with the current environment, change in childhood, adult development, and older age. However, child development is predominantly characterized by growth and organization of knowledge representations in various domains (e.g., sensory-motor, procedural, declarative, episodic), whereas changes with aging are dominated by declines in the control functions that operate on this knowledge (Craik & Bialystok 2006, 2008). These mechanisms evolve differently for different types of memory. Compared with semantic memory, episodic memory shows a more symmetrical inverted U-shaped trajectory across the lifespan, possibly because it is largely based on cognitive control processes, which have their highest efficiency in younger adults (Craik & Bialystok 2008). Poor episodic memory in older compared with younger adults is consistently reported as one of the most prominent effects of cognitive aging and seems to result primarily from deficient encoding of incoming information (Park & Gutchess 2005). Notably, also in this functional domain, differential results have been reported. Recent evidence suggests that when typical patterns of encoding deficiency (weak activation in medial-temporal lobe areas) are accompanied by increased activation of prefrontal areas, older adults’ performance on memory tasks is improved (Gutchess et al. 2005; Morcom et al. 2007; Davis et al. 2008). Similar results have led to a line of hypotheses on neurocognitive aging that is summarized in the following section.

Frontal Compensation

Findings of stronger and/or additional activation of frontal areas in older compared with younger adults have been reported in several other domains, such as motor performance (Heuninckx et al. 2008), visual processing (Grady 2000), and working memory (Cappell et al. 2010; Schneider-Garces et al. 2010). They indicate a general posterior-to-anterior shift in aging (PASA; Davis et al. 2008), as well as a less lateralized recruitment of brain areas for maintaining performance levels [Hemispheric Asymmetry Reduction in Older Adults (HAROLD); Cabeza 2001]. These altered patterns of brain activity have been interpreted as compensating for posterior under-activation and the neuroanatomical decline associated with aging (Davis et al. 2008). In addition, a hypothesis has been advanced that explains how this might be realized in the brain by proposing a compensation-related utilization of neural circuits (CRUNCH; Reuter-Lorenz & Lustig 2005; Reuter-Lorenz & Cappell 2008). This perspective was further developed in the concept of scaffolding, which is central to a dominant integrative neurocognitive model of normal aging: the so-called Scaffolding Theory of Aging and Cognition (STAC), which builds on neuroscientific and psychological theories of cognitive aging (Park & Reuter-Lorenz 2009; Reuter-Lorenz & Park 2010). The STAC model states that the aging brain responds to neural challenges (e.g., atrophy, neurotransmitter receptor reduction) by forging alternative neural circuits (scaffolds). Although these networks may work less efficiently than the original ones of young adulthood, they allow older individuals to preserve a high level of cognitive functioning.

In addition to the described age-related patterns of frontal over-activation for sustaining performance levels compared with younger adults, it has been shown that high task loads can induce an upregulation of additional prefrontal cortical activity in adults of all ages (e.g., Braver et al. 2001; Mattay et al. 2006). We consider these findings to be in line with our proposed conceptual description of listening effort and the conceptualization of adversity as a mismatch between task demands and personal resources, which may lead to the allocation of additional cognitive resources and consequently a higher processing load when listeners remain engaged in the task. Furthermore, despite the fact that the concept of compensatory scaffolding originates from a theory of aging, its authors propose it as a process that does not simply begin in older age but rather occurs across the lifespan. Notably, Reuter-Lorenz and Park (2010) also state that it is affected by experience, so that new learning, engagement in mentally challenging activities, and cognitive training might enhance the brain’s ability to build effective new scaffolding to maintain a high level of cognitive function. As such, the ability for scaffolding can be interpreted as a general neural potential to adapt personal processing resources to meet task demands.

On a functional level, compensatory over-activation of frontal and especially prefrontal brain regions is mainly associated with executive functions. These comprise a variety of abilities and correspond to the proposed general factor of cognitive control that has been suggested to underlie cognitive age changes (Craik & Bialystok 2008). In general, all subcomponents of control follow a common pattern of maturation during childhood and adolescence and of decline in older age that parallels the development of frontal lobe functions (Hasher & Zacks 1988; Wingfield et al. 1988; Salthouse 1996; Stuss 2011). Thus, regardless of the relationships among the different subcomponents, changes in the functional integrity of the frontal brain are assumed to be a common cause underlying their functional decline (Li & Lindenberger 1999; Li et al. 2001). Hence, despite their compensatory role, for example, for other age-related cognitive changes or in cases of a mismatch between available cognitive resources and task demands, executive functions themselves are subject to decline and function less efficiently in older age (Reuter-Lorenz 2000; Raz 2008).

SUMMARY AND CONCLUSIONS

The term listening effort is used broadly but inconsistently in the literature. Such inconsistency increases the risk of controversy, misinterpretation of research results, and confusion in applying research findings to guide clinical practice. To support the development of a consensus on the concept, we propose to differentiate two subcomponents of listening effort, namely processing effort and perceived effort. Distinguishing between these two types of listening effort helps to understand why the assessment of subjective compared with objective aspects of listening effort often leads to diverging results. We propose a conceptual description of how processing effort arises from the interaction of listener-external characteristics of the listening situation and listener-internal resources for auditory and cognitive processing, as well as the listener’s personal state. In the proposed conceptual description, cognitive resources are allocated depending on the listener’s processing capacities for listening and spoken-language comprehension under adverse conditions. It is also assumed that limited cognitive resources are flexibly allocated according to the demands of the tasks pursued by the listener. Specific cognitive abilities and the ability to flexibly allocate cognitive resources are prone to differential age-related changes and neurocognitive reorganization. The presented conceptual description provides a basis to facilitate further discussion and a better understanding of the implications of these age-related changes. However, the conditions under which listening becomes adverse and processing effort is required for spoken-language comprehension have to be considered on an individual basis. The same holds for how an individual recruits extra cognitive resources to pursue specific goals while performing listening tasks. Overall, the proposed conceptual description seems practical for integrating research results from different areas, for improving understanding of the complex concept of listening effort, and for guiding future research activities in this area.

ACKNOWLEDGMENTS

The authors thank Florine Bachmann for her assistance in organizing the references for this manuscript.

Both authors are employed at Phonak AG, Switzerland. The present work was conducted in Phonak’s research program Cognitive & Ecological Audiology at the Department of Science & Technology.

*Note that this definition of adversity is slightly different from the one given in Mattys et al. (2012), where adverse conditions are defined as “any factor leading to a decrease in speech intelligibility on a given task relative to the level of intelligibility when the same task is performed in optimal listening situations.”

REFERENCES

Anderson S., White-Schwoch T., Parbery-Clark A., et al. A dynamic auditory-cognitive system supports speech-in-noise perception in older adults. Hear Res, (2013). 300, 18–32.
Ardoint M., Sheft S., Fleuriot P., et al. Perception of temporal fine-structure cues in speech with minimal envelope cues for listeners with mild-to-moderate hearing loss. Int J Audiol, (2010). 49, 823–831.
Arlinger S., Lunner T., Lyxell B., et al. The emergence of cognitive hearing science. Scand J Psychol, (2009). 50, 371–384.
Baddeley A. Working memory. Science, (1992). 255, 556–559.
Baltes P. B., Cornelius S. W., Spiro A., et al. Integration versus differentiation of fluid/crystallized intelligence in old age. Dev Psychol, (1980). 16, 625–635.
Blauert J. Spatial Hearing: The Psychophysics of Human Sound Localization. (1997). Cambridge, MA: MIT Press.
Braver T. S., Barch D. M., Kelley W. M., et al. Direct comparison of prefrontal cortex regions engaged by working and long-term memory tasks. Neuroimage, (2001). 14(1 Pt 1), 48–59.
Burt C. The differentiation of intellectual abilities. Br J Educ Psychol, (1954). 24, 76–90.
Cabeza R. Cognitive neuroscience of aging: Contributions of functional neuroimaging. Scand J Psychol, (2001). 42, 277–286.
Cappell K. A., Gmeindl L., Reuter-Lorenz P. A. Age differences in prefrontal recruitment during verbal working memory maintenance depend on memory load. Cortex, (2010). 46, 462–473.
Cherry E. C. Some experiments on the recognition of speech, with one and with two ears. J Acoust Soc Am, (1953). 25, 975–979.
Craik F. I., Bialystok E. Cognition through the lifespan: Mechanisms of change. Trends Cogn Sci, (2006). 10, 131–138.
Craik F. I. M., Bialystok E. Craik F. I. M., Salthouse T. A., Lifespan cognitive development: The roles of representation and control. The Handbook of Aging and Cognition (2008). New York, NY: Psychology Press. 557–601.
Daneman M., Merikle P. M. Working memory and language comprehension: A meta-analysis. Psychon Bull Rev, (1996). 3, 422–433.
Davis S. W., Dennis N. A., Daselaar S. M., et al. Que PASA? The posterior-anterior shift in aging. Cereb Cortex, (2008). 18, 1201–1209.
Diamond A. Executive functions. Annu Rev Psychol, (2013). 64, 135–168.
Divenyi P. L., Stark P. B., Haupt K. M. Decline of speech understanding and auditory thresholds in the elderly. J Acoust Soc Am, (2005). 118, 1089–1100.
Dolan R. J. Emotion, cognition, and behavior. Science, (2002). 298, 1191–1194.
Downs D. W. Effects of hearing aid use on speech discrimination and listening effort. J Speech Hear Disord, (1982). 47, 189–193.
Eysenck M. W., Derakshan N., Santos R., et al. Anxiety and cognitive performance: Attentional control theory. Emotion, (2007). 7, 336–353.
Fogerty D. Perceptual weighting of individual and concurrent cues for sentence intelligibility: Frequency, envelope, and fine structure. J Acoust Soc Am, (2011). 129, 977–988.
Friederici A. D. The brain basis of language processing: From structure to function. Physiol Rev, (2011). 91, 1357–1392.
Garrett H. E. A developmental theory of intelligence. Am Psychol, (1946). 1, 372–378.
George E. L., Zekveld A. A., Kramer S. E., et al. Auditory and nonauditory factors affecting speech reception in noise by older listeners. J Acoust Soc Am, (2007). 121, 2362–2375.
Glyde H., Buchholz J. M., Dillon H., et al. The importance of interaural time differences and level differences in spatial release from masking. J Acoust Soc Am, (2013). 134, 2937–2945.
Gordon-Salant S., Fitzgibbons P. J., Yeni-Komshian G. H. Auditory temporal processing and aging: Implications for speech understanding of older people. Audiol Res, (2011). 1, e4.
Goverts S. T., Houtgast T. The binaural intelligibility level difference in hearing-impaired listeners: The role of supra-threshold deficits. J Acoust Soc Am, (2010). 127, 3073–3084.
Grady C. L. Functional brain imaging and age-related changes in cognition. Biol Psychol, (2000). 54, 259–281.
Gutchess A. H., Welsh R. C., Hedden T., et al. Aging and the neural correlates of successful picture encoding: Frontal activations compensate for decreased medial-temporal activity. J Cogn Neurosci, (2005). 17, 84–96.
Hasher L., Zacks R. T. Bower G. H., Working memory, comprehension, and aging: A review and a new view. The Psychology of Learning and Motivation (1988). New York, NY: Academic Press. 193–225.
Heuninckx S., Wenderoth N., Swinnen S. P. Systems neuroplasticity in the aging brain: Recruiting additional neural resources for successful motor performance in elderly persons. J Neurosci, (2008). 28, 91–99.
Hickok G., Poeppel D. The cortical organization of speech processing. Nat Rev Neurosci, (2007). 8, 393–402.
Hockey R. The Psychology of Fatigue: Work, Effort and Control. (2013). Cambridge, UK: Cambridge University Press.
Hopkins K., Moore B. C. The effects of age and cochlear hearing loss on temporal fine structure sensitivity, frequency selectivity, and speech reception in noise. J Acoust Soc Am, (2011). 130, 334–349.
Hornsby B. W. Y., Naylor G., Bess F. H. A taxonomy of fatigue concepts and their relation to hearing loss. Ear Hear, (2016). 37, 136S–144S.
Humes L. E., Dubno J. R. Gordon-Salant S., Frisina R. D., Popper A. N., Factors affecting speech understanding in older adults. The Aging Auditory System (2010). New York, NY: Springer. 211–257.
Jackson H. M., Moore B. C. Contribution of temporal fine structure information and fundamental frequency separation to intelligibility in a competing-speaker paradigm. J Acoust Soc Am, (2013). 133, 2421–2430.
Kahneman D. Attention and Effort. (1973). Englewood Cliffs, NJ: Prentice Hall.
Kahneman D. Thinking, Fast and Slow. (2011). New York, NY: Farrar, Straus and Giroux.
Kiessling J., Pichora-Fuller M. K., Gatehouse S., et al. Candidature for and delivery of audiological services: Special needs of older people. Int J Audiol, (2003). 42(Suppl 2), 2S92–101.
Larsby B., Arlinger S. Auditory temporal and spectral resolution in normal and impaired hearing. J Am Acad Audiol, (1999). 10, 198–210.
Li S. C., Lindenberger U. Eichenbaum H., Cross-level unification: A computational exploration of the link between deterioration of neurotransmitter systems and dedifferentiation of cognitive abilities in old age. Cognitive Neuroscience of Memory (1999). Ashland, OH: Hogrefe & Huber. 103–146.
Li S. C., Lindenberger U., Sikström S. Aging cognition: From neuromodulation to representation. Trends Cogn Sci, (2001). 5, 479–486.
Lindenberger U., Baltes P. B. Sensory functioning and intelligence in old age: A strong connection. Psychol Aging, (1994). 9, 339–355.
Lunner T., Sundewall-Thorén E. Interactions between cognition, compression, and listening conditions: Effects on speech-in-noise performance in a two-channel hearing aid. J Am Acad Audiol, (2007). 18, 604–617.
Marslen-Wilson W. D. Functional parallelism in spoken word-recognition. Cognition, (1987). 25, 71–102.
Mattay V. S., Fera F., Tessitore A., et al. Neurophysiological correlates of age-related changes in working memory capacity. Neurosci Lett, (2006). 392, 32–37.
Mattys S. L., Davis M. H., Bradlow A. R., et al. Speech recognition in adverse conditions: A review. Lang Cogn Process, (2012). 27, 953–978.
McGarrigle R., Munro K. J., Dawes P., et al. Listening effort and fatigue: What exactly are we measuring? A British Society of Audiology Cognition in Hearing Special Interest Group ‘white paper’. Int J Audiol, (2014). 53, 433–440.
Miller E. K., Cohen J. D. An integrative theory of prefrontal cortex function. Annu Rev Neurosci, (2001). 24, 167–202.
Morcom A. M., Li J., Rugg M. D. Age effects on the neural correlates of episodic retrieval: Increased cortical recruitment with matched performance. Cereb Cortex, (2007). 17, 2491–2506.
Nilsson L. G. Memory function in normal aging. Acta Neurol Scand, (2003). 107, 7–13.
Parasuraman R., Nestor P., Greenwood P. Sustained-attention capacity in young and older adults. Psychol Aging, (1989). 4, 339–345.
Park D. C., Gutchess A. Cabeza R., Nyberg L., Park D., Long-term memory and aging: A cognitive neuroscience perspective. Cognitive Neuroscience of Aging: Linking Cognitive and Cerebral Aging (2005). New York, NY: Oxford University Press. 218–245.
Park D. C., Reuter-Lorenz P. The adaptive brain: Aging and neurocognitive scaffolding. Annu Rev Psychol, (2009). 60, 173–196.
Phillips N. The implications of cognitive aging for listening and the FUEL model. Ear Hear, (2016). 37, 44S–51S.
Pichora-Fuller M. K. Cognitive aging and auditory information processing. Int J Audiol, (2003). 42(Suppl 2), 2S26–32.
Pichora-Fuller M. K., Kramer S. E., Eckert M., et al. Hearing impairment and cognitive energy: A framework for understanding effortful listening (FUEL). Ear Hear, (2016). 37, 5S–27S.
Posner M. I., Dehaene S. Gazzaniga M. S., Attentional networks. Cognitive Neuroscience - A Reader. (2000). Oxford, UK: Blackwell Publishers Ltd.
Posner M. I., DiGirolamo G. J. Parasuraman R., Executive attention: Conflict, target detection, and cognitive control. The Attentive Brain (1998). Cambridge, MA: MIT Press. 401–423.
Purcell D. W., John S. M., Schneider B. A., et al. Human temporal auditory acuity as assessed by envelope following responses. J Acoust Soc Am, (2004). 116, 3581–3593.
Raz N. Craik F. I., Salthouse T. A., Aging of the brain and its impact on cognitive performance: Integration of structural and functional findings. The Handbook of Aging and Cognition (2008). Mahwah, NJ: Lawrence Erlbaum Associates Publishers. 1–90.
Reuter-Lorenz P. A. Schwarz N., Cognitive neuropsychology of the aging brain. Cognitive Aging: A Primer (2000). New York, NY: Psychology Press. 93–114.
Reuter-Lorenz P. A., Cappell K. A. Neurocognitive aging and the compensation hypothesis. Curr Dir Psychol Sci, (2008). 17, 177–182.
Reuter-Lorenz P. A., Lustig C. Brain aging: Reorganizing discoveries about the aging mind. Curr Opin Neurobiol, (2005). 15, 245–251.
Reuter-Lorenz P. A., Park D. C. Human neuroscience and the aging mind: A new look at old problems. J Gerontol B Psychol Sci Soc Sci, (2010). 65, 405–415.
Rönnberg J., Lunner T., Zekveld A., et al. The Ease of Language Understanding (ELU) model: Theoretical, empirical, and clinical advances. Front Syst Neurosci, (2013). 7, 31.
Ruggles D., Bharadwaj H., Shinn-Cunningham B. G. Why middle-aged listeners have trouble hearing in everyday settings. Curr Biol, (2012). 22, 1417–1422.
Salthouse T. A. The processing-speed theory of adult age differences in cognition. Psychol Rev, (1996). 103, 403–428.
Schneider-Garces N. J., Gordon B. A., Brumback-Peltz C. R., et al. Span, CRUNCH, and beyond: Working memory capacity and the aging brain. J Cogn Neurosci, (2010). 22, 655–669.
Schunk D. H. Self-efficacy, motivation, and performance. J Appl Sport Psychol, (1995). 7, 112–137.
Smith J., Baltes P. B. Mayer K. U., Baltes P. B., Altern aus psychologischer Perspektive: Trends und Profile im hohen Alter [Aging from a psychological perspective: Trends and profiles in very old age]. Die Berliner Altersstudie (1996). Berlin: Akademie Verlag. 221–250.
Spearman C. The Abilities of Man, Their Nature and Measurement. (1927). London: Macmillan and Co., Limited.
Strelcyk O., Dau T. Relations between frequency selectivity, temporal fine-structure processing, and speech reception in impaired hearing. J Acoust Soc Am, (2009). 125, 3328–3345.
Stuss D. T. Functions of the frontal lobes: Relation to executive functions. J Int Neuropsychol Soc, (2011). 17, 759–765.
Summers V., Leek M. R. F0 processing and the separation of competing speech signals by listeners with normal hearing and with hearing loss. J Speech Lang Hear Res, (1998). 41, 1294–1306.
Ursin H., Eriksen H. R. The cognitive activation theory of stress. Psychoneuroendocrinology, (2004). 29, 567–592.
van Rooij J. C., Plomp R. Auditive and cognitive factors in speech perception by elderly listeners. III. Additional data and final discussion. J Acoust Soc Am, (1992). 91, 1028–1033.
Vongpaisal T., Pichora-Fuller M. K. Effect of age on F0 difference limen and concurrent vowel identification. J Speech Lang Hear Res, (2007). 50, 1139–1156.
Wingfield A. Park D., Schwarz N., Speech perception and the comprehension of spoken language in adult aging. Cognitive Aging: A Primer (2000). Philadelphia, PA: Psychology Press. 175–196.
Wingfield A. The evolution of models of working memory and cognitive resources. Ear Hear, (2016). 37, 35S–43S.
Wingfield A., Stine E. A., Lahar C. J., et al. Does the capacity of working memory change with age? Exp Aging Res, (1988). 14, 103–107.
World Health Organization. International Classification of Functioning, Disability and Health (ICF). (2001). Geneva, Switzerland: World Health Organization.
Keywords:

Adverse listening; Cognitive aging; Cognitive load; Cognitive resources; Listening effort; Spoken-language comprehension

Copyright © 2016 Wolters Kluwer Health, Inc. All rights reserved.