Emotion communication refers to verbal expressions and nonverbal behaviors that communicate an internal emotional or affective state (Davitz 1964). It is a fundamental component of interpersonal relationships, encompassing facial, gestural, and vocal expression. In many scenarios, individuals must rely on the voice alone, whether because of physical factors (e.g., telephone use; dim lighting) or social factors (e.g., cultural norms around eye contact). Because hearing impairment is the most common sensory deficit globally and a leading cause of disability (WHO 2008), there is significant value in considering the role of hearing loss in emotion communication, both in terms of the psychosocial sequelae associated with hearing loss and in terms of listening performance in contexts where environmental and speech signals carry emotion information or evoke responses from emotion systems.
Emotional Impact of Hearing Loss
To date, research on emotion in hearing has largely focused on mental health or subjective well-being, either in those who experience hearing loss or in their significant others. The general finding of this line of work is that as hearing loss increases, patients tend to report poorer outcomes on self-report questionnaires assessing emotional impact (e.g., Newman et al. 1990; Garstecki & Erler 1996; Scherer & Frisina 1998), quality of life (a multidimensional construct, one factor of which is emotional well-being; Dalton et al. 2003; Tambs 2004; Ciorba et al. 2012), and depression (e.g., Cacciatore et al. 1999; Naramura et al. 1999; Wallhagen et al. 2004).
The emotional impact of hearing loss is not limited to individuals experiencing the loss of hearing but extends to significant others. This observation has been explored in a number of studies, with most research focused on either the spouses of patients with hearing loss (e.g., Hétu et al. 1993; Stephens et al. 1995; Scarinci et al. 2009; Preminger & Meeks 2010; Barker et al. 2017) or the parents of children with hearing loss (Moses 1983; Hintermair 2000; Yoshinaga-Itano & Abdala de Uzcategui 2001; Kurtzer-White & Luterman 2003). Such impacts may include increased sadness, anger, fear, resentment, guilt, grief, frustration, and irritation, as well as lower quality of life and decreased intimacy and bonding.
Listening in Environments That Contain Vocal Emotion
In contrast to research investigating the emotional impact of hearing loss, an emerging topic in hearing research concerns communication experiences in listening environments that contain vocal emotion. Stimuli in such studies typically include semantically neutral speech spoken with emotion (e.g., anger, sadness, fear, happiness) or vocalizations and sounds that typically elicit an emotional reaction (e.g., a baby crying, a siren). Dupuis and Pichora-Fuller (2014) found that vocal emotion, depending on the specific emotion, can either improve or worsen speech intelligibility in younger and older adult listeners with normal or near-normal audiometric thresholds. For adults with acquired mild to severe hearing loss, Picou (2016) observed that compared with normal-hearing listeners, listeners with hearing loss reported smaller differences in ratings (i.e., less range) between high- and low-valence tokens and an overall reduction in valence ratings at high signal presentation levels. Deficits in emotion identification have also been observed in children with hearing loss (Dyck et al. 2004). For example, relative to children with normal hearing, children with hearing loss identify emotions of semantically neutral statements less accurately (Most & Aviner 2009; Most & Michaelis 2012). In one study, children with normal hearing were better at identifying emotion in an audiovisual compared to a visual-only condition, while those with hearing loss did not show a difference, suggesting that they did not gain any additional information from auditory cues (Most & Aviner 2009). Similarly, both children and adults with cochlear implants perform more poorly on auditory emotion-identification tasks than age-matched controls with normal hearing (Hopyan-Misakyan et al. 2009; Chatterjee et al. 2015).
Although there has been research investigating objective listening deficits exhibited by hard-of-hearing individuals when listening to signals that contain emotion information, less is known about the extent to which individuals with hearing loss experience self-perceived communication difficulties when listening to such signals. Broadly, self-report measures represent the most commonly used tool to characterize function and disability, psychological impact, activity capabilities and associated activity limitations, impact on participation in everyday listening situations, and/or environmental and personal factors that modulate experiences associated with health conditions (WHO 2001). While there are many self-report questionnaires designed to evaluate the experience of hearing, some of which probe emotional consequences associated with hearing loss, to our knowledge, extant questionnaires do not focus on experienced hearing handicap when listening to signals that contain emotion information.* To facilitate research on the role of vocal emotion in communication, the purpose of this research was to develop and evaluate the Emotional Communication in Hearing Questionnaire (EMO-CHeQ), a self-report questionnaire that assesses experiences of hearing and handicap when listening to signals that contain emotion information. Specifically, our objectives were to generate and evaluate items for the EMO-CHeQ, determine whether the EMO-CHeQ differentiates groups based on self-reported hearing loss, and evaluate whether the EMO-CHeQ distinguishes hearing groups on the basis of emotion-identification performance. Such a questionnaire could support hearing research by providing a means to quantify the extent to which communication in environments that contain emotion information is problematic, and it could potentially be used as an outcome measure to assess possible benefits associated with rehabilitation interventions.
Development and evaluation of the EMO-CHeQ was a multistep process. The steps consisted of an informal discussion group, a content validity check, a crowdsourcing evaluation of potential items (study 1; crowdsourcing refers to the practice of obtaining information or input on a task by enlisting the participation of a large number of individuals via the internet), and a behavioral evaluation of the final questionnaire (study 2). Each of the steps is described below.
Informal Discussion Group
Following a review of the literature and extant self-report questionnaires concerning consequences of hearing loss, the authors developed discussion prompts to facilitate a group discussion. The goal of this conversation was to better understand experiences of listening when communicating in environments that contain signals conveying emotion information. The authors also developed a list of potential items for inclusion in the EMO-CHeQ, which was likewise discussed by the group. The list of potential items was adapted from existing questionnaires so as to be relevant for emotion communication. These included modified versions of one question from the Self-Assessment of Communication questionnaire, seven questions from the Hearing Handicap Inventory for the Elderly (Ventry & Weinstein 1982), and one question from the Speech, Spatial, and Qualities of Hearing scale. Participants received a small honorarium ($50) in appreciation of their time.
The discussion group consisted of two facilitators and four individuals (aged 70 to 74 years) recruited from the Ryerson University SMART lab participant pool. Participants included one female with severe hearing loss, two males with moderate hearing loss, and one female significant other. The significant other had extensive experience living with people who are hard of hearing. In addition to her experience with her partner, both her mother and father were experienced users of hearing aids.
The discussion group consisted of two parts. In the first part, participants engaged in a facilitator-guided but open-ended discussion about their experiences of emotion communication difficulties. In the second part, participants were asked to review and provide feedback on the list of items being considered for inclusion in the EMO-CHeQ. Based on the discussion, five broad themes relevant to the experience of emotion communication difficulties in the context of hearing loss were identified: (1) situational factors that impact emotion identification, (2) speaker characteristics that affect emotion identification, (3) production of emotional content in speech, (4) impact of emotion communication difficulties on social interactions, and (5) emotional impact and quality of life. A sixth theme related to “mood” was added. In comparison to emotions, moods are less specific, less intense, and less likely to be triggered by a particular stimulus (Ekkekakis 2013). Items for theme 6 (mood) were modified from items in each of the five aforementioned (emotion) themes.
Content Validity Evaluation
Informed by the literature review and discussion group, the authors generated 45 items that could potentially be included in the EMO-CHeQ questionnaire. Content validity was evaluated by nine content experts, exceeding the recommended minimum of 3 to 5 content experts suggested by Lynn (1986). The research team concluded that the content experts should include individuals with recognized expertise in at least one of the following areas: hearing and hearing rehabilitation, emotion, and questionnaire development. It was also decided that the content experts should represent researchers, clinicians, and patients. The research team nominated individuals and contacted them directly to participate in the development of the EMO-CHeQ. The content experts included five researchers working in cognate fields within university-based labs (one of whom is also a hard-of-hearing person), one hearing scientist working in an industry-based lab, two audiologists, and one hard-of-hearing person; one content expert with expertise in emotion and hearing had attended the discussion group. Content experts provided independent ratings using an online questionnaire programmed in SurveyMonkey (www.surveymonkey.com). Following the content validity index approach (CVI; DeVellis 1991; Polit & Hungler 1999; Smith et al. 2011), the content experts were asked to rate each item of the survey in terms of its relevance, clarity, simplicity, and ambiguity using a four-point response scale. Relevance refers to how applicable the item is to assessing the topic of a questionnaire. Clarity refers to how clearly the item is written. Simplicity refers to how straightforward the item is to understand. Ambiguity refers to the extent to which the meaning of the item is open to interpretation. CVI scoring is accomplished by calculating the mean proportion for each of the four components across all of the content experts, and revisions are suggested for items scoring below 0.75.
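The CVI computation just described can be sketched in a few lines. The nine-expert ratings below are hypothetical, and the convention of counting ratings of 3 or 4 as endorsements follows standard CVI practice rather than any detail reported here:

```python
import numpy as np

def item_cvi(ratings):
    """Item-level CVI: the proportion of experts endorsing the item,
    i.e., rating it 3 or 4 on the four-point response scale."""
    return float(np.mean(np.asarray(ratings) >= 3))

# Hypothetical relevance ratings for one item from nine content experts.
relevance = [4, 3, 4, 4, 2, 3, 4, 3, 4]
cvi = item_cvi(relevance)          # 8 of 9 experts endorse the item
needs_revision = cvi < 0.75        # revision is suggested below 0.75
```

Averaging `item_cvi` over all items for each of the four rated components yields scale-level means of the kind reported for the EMO-CHeQ.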
The mean CVI ratings were excellent (relevance = 0.89, clarity = 0.85, simplicity = 0.97, and ambiguity = 0.84). Mean scores for all items can be found in Appendix A, Supplemental Digital Content 1, http://links.lww.com/EANDH/A445. Inter-rater reliability was assessed using an intraclass correlation coefficient (ICC = 0.61, p < 0.001). Fleiss (1986) describes ICC values from 0.40 to 0.75 as “fair to good” (p. 7); thus, the 45-item version of the EMO-CHeQ exceeded the minimum acceptable value he proposes but fell below the >0.75 value proposed by Streiner and Norman (1995). The ICC scores observed on the 45 items of the questionnaire may reflect, in part, the heterogeneity of the backgrounds of the content experts (i.e., researchers, clinicians, and patients). As per the suggestions of the content experts, of the 45 questions included in the CVI analysis, 36 items were retained in their original form, six items were modified, and three items were excluded. Feedback provided by the content experts included recommendations to (1) refine items so that they describe situations and experiences with as much specificity as possible, (2) eliminate items that were unclear or confusing (e.g., we deleted an item that focused on the ability to control one’s own voice to conceal expressed vocal emotion), and (3) ensure consistency of question formats (e.g., consistently focusing on difficulties/challenges rather than abilities). In total, the 42-item version of the EMO-CHeQ included six to nine items for each of the themes.
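The inter-rater ICC can be computed from a two-way ANOVA decomposition of the item-by-expert rating matrix. The sketch below implements ICC(2,1) (two-way random effects, absolute agreement, single rater); the specific ICC form used in the analysis above is an assumption, as it is not stated:

```python
import numpy as np

def icc_2_1(data):
    """ICC(2,1), Shrout & Fleiss: two-way random effects, absolute
    agreement, single rater. `data` is (n_items x n_raters)."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    rows = data.mean(axis=1)                            # item means
    cols = data.mean(axis=0)                            # rater means
    msr = k * np.sum((rows - grand) ** 2) / (n - 1)     # between-items MS
    msc = n * np.sum((cols - grand) ** 2) / (k - 1)     # between-raters MS
    resid = data - rows[:, None] - cols[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))      # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Perfectly agreeing raters yield a value of 1; values between 0.40 and 0.75 fall in Fleiss’s “fair to good” range.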
STUDY 1: CROWDSOURCING-BASED EVALUATION OF THE EMO-CHEQ
There were two overall objectives in this study. The first was to better understand the underlying factor structure of the EMO-CHeQ questionnaire and to determine the relative contribution of each of the items in the questionnaire. Completing these objectives would provide support regarding the content and construct validity of the EMO-CHeQ. The second objective was to determine if there are differences in reported handicap on the EMO-CHeQ between individuals with self-reported normal hearing, unaided listeners with hearing loss, and aided listeners with hearing loss. As it relates to the EMO-CHeQ, we define handicap as the extent to which communication is impaired in situations where a listener wishes to understand vocal emotion information present in the environment.
Materials and Methods
The 42-item version of the questionnaire was evaluated using an internet-based survey methodology through Amazon Mechanical Turk (AMTurk), an online labor market. We recruited groups of participants with (1) self-reported normal hearing (NH), (2) self-reported hearing impairment confirmed by a health professional and who did not own or use hearing aids (HI), and (3) self-reported hearing impairment and ownership and use of hearing aids (HA). The survey took approximately 20 to 25 min to complete. Participants received an honorarium of $1 in compensation for their time.
In total, 1030 respondents completed the questionnaire. We inspected the data for quality assurance and eliminated respondents with identical IP addresses (n = 326), participants who completed less than 80% of the survey items (n = 86), and those who had inconsistent responses across the survey items (e.g., indicating a hearing impairment when completing the version of the survey intended for individuals with self-reported normal hearing). A total of 444 participants were removed. The final sample consisted of 586 participants (322 male, 260 female, and 4 participants who did not wish to answer) and included 243 individuals in the NH group, 193 individuals in the HI group, and 150 individuals in the HA group (see Table 1).
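The quality-assurance filtering above can be sketched with pandas. The column names are hypothetical, and removing every record that shares an IP address (rather than keeping one copy) is an assumption about the procedure:

```python
import pandas as pd

# Hypothetical raw responses (4 respondents).
raw = pd.DataFrame({
    "ip": ["1.1.1.1", "1.1.1.1", "2.2.2.2", "3.3.3.3"],
    "pct_items_completed": [100, 95, 62, 100],
    "nh_survey_version": [True, True, True, False],
    "reports_impairment": [False, True, False, False],
})

clean = raw[~raw["ip"].duplicated(keep=False)]       # identical IP addresses
clean = clean[clean["pct_items_completed"] >= 80]    # <80% of items completed
# Inconsistent responses: impairment reported on the NH survey version.
clean = clean[~(clean["nh_survey_version"] & clean["reports_impairment"])]
```

Each filter corresponds to one of the three exclusion criteria described above; applied in sequence, they yield the analyzable sample.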
For the crowdsourcing component of the study, all participants were asked to complete a brief demographics questionnaire, the 10-item screening version of the Hearing Handicap Inventory for Adults (HHIA-S; Newman et al. 1991) and the 42-item version of the EMO-CHeQ. For the HHIA-S, respondents rate the extent to which different situations result in hearing-related problems on a scale with three response options (No, Sometimes, and Yes). The HHIA-S was used to characterize the sample and to assess whether the EMO-CHeQ assesses phenomena other than that assessed by the HHIA-S. For the EMO-CHeQ, participants were asked to select a level of agreement on a five-point Likert scale ranging from strongly disagree (scored as 1) to strongly agree (scored as 5). Participants who reported owning and using hearing aids were instructed to complete the EMO-CHeQ as though they were wearing their hearing aids and also completed a questionnaire to assess their satisfaction with their current hearing aids (the Satisfaction with Amplification in Daily Life [SADL] questionnaire; Cox & Alexander 1999). For the SADL, respondents provide ratings on a seven-point scale ranging from Not at all to Tremendously. The SADL was used to assess whether individuals relatively satisfied with their hearing aids exhibited less handicap on the EMO-CHeQ.
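Scoring for the two instruments can be illustrated as follows. Treating the EMO-CHeQ total score as the mean of the 1-to-5 item responses is an assumption, though it is consistent with the group means reported in the results; the No = 0 / Sometimes = 2 / Yes = 4 weighting follows the conventional scoring of the HHIA-S screener:

```python
def emocheq_total(responses):
    """EMO-CHeQ: five-point Likert items, strongly disagree (1) to
    strongly agree (5). Total score taken as the item mean (assumed)."""
    if not all(1 <= r <= 5 for r in responses):
        raise ValueError("EMO-CHeQ responses must lie on the 1-5 scale")
    return sum(responses) / len(responses)

def hhias_total(responses):
    """HHIA-S: 10 items answered No, Sometimes, or Yes."""
    weights = {"No": 0, "Sometimes": 2, "Yes": 4}
    return sum(weights[r] for r in responses)

emocheq_total([3, 4, 2, 5])                            # -> 3.5
hhias_total(["Yes", "Sometimes", "No"] * 3 + ["Yes"])  # -> 22
```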
To establish the existence of group differences on the EMO-CHeQ, a univariate analysis of variance was conducted where hearing status (NH, HI, and HA) and age (younger and older) were between-subject variables and total score on the EMO-CHeQ was the dependent variable. Younger adults were defined as individuals aged 18 to 64 years old, and older adults were defined as individuals 65 years of age or older. Consistent with past work in health science research, parametric statistical analyses were employed for Likert scale data, as previous work suggests parametric statistics are adequately robust to the violation of the assumption regarding the appropriateness of parametric statistics for ordinal data (Norman 2010). Post hoc testing was conducted using the Student–Newman–Keuls method (p < 0.05 corrected for family-wise error). Associations between variables were determined using Pearson bivariate correlations. To determine if there was a group difference on the EMO-CHeQ between those who were relatively satisfied and those who were relatively dissatisfied with their hearing aids, a one-way analysis of variance was conducted. To better understand the factor structure of the EMO-CHeQ, an exploratory factor analysis (EFA) was conducted. Prior to conducting the EFA, item intercorrelations were inspected to ensure that no items had correlations that were either too high (>0.9) or too low (<0.3). EFA was conducted using oblique rotation (Direct Oblimin) as we assumed that the factors would be correlated with each other. All statistical analyses were conducted using IBM SPSS statistics software (version 24).
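The core of this analysis pipeline can be sketched outside SPSS, for example with scipy. This is a one-way simplification of the hearing status × age design, and the group scores below are simulated values matching the reported group sizes and means, not the study data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated total EMO-CHeQ scores for the three hearing-status groups.
nh = rng.normal(2.3, 0.8, 243)
hi = rng.normal(3.3, 0.8, 193)
ha = rng.normal(3.2, 0.8, 150)

# Univariate ANOVA with hearing status as the between-subjects factor.
f_stat, p_val = stats.f_oneway(nh, hi, ha)

# Pearson bivariate correlation between two questionnaire scores.
x = np.concatenate([nh, hi])
y = 0.9 * x + rng.normal(0, 0.3, x.size)   # simulated second measure
r, p = stats.pearsonr(x, y)
```

Post hoc comparisons (Student-Newman-Keuls) and the two-way design would require a dedicated package; the sketch only shows the omnibus test and correlation steps.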
Exploratory Factor Analysis
The first objective of study 1 was to better understand the underlying factor structure of the EMO-CHeQ. One of the original scale items emerged as clearly anomalous, with correlations of 0.3 or lower, and was thus removed from all subsequent analyses. The Kaiser–Meyer–Olkin test confirmed the sampling adequacy for the EFA, KMO = 0.97 (“superb” per Field 2005). Bartlett’s test of sphericity showed that the correlation matrix was significantly different from the identity matrix and thus factorable, χ2(df = 861) = 18,323.56, p < 0.001. We used Kaiser’s criterion of extracting components with eigenvalues above 1 (Kaiser 1960, as cited in Field 2009). A total of five components were extracted; however, after inspecting the scree plot and factor loadings, we re-ran the analysis with four factors, which together explained 66.3% of the variance.
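The two factorability checks reported above follow standard formulas, sketched here with numpy on simulated one-factor data (SPSS performed the actual computation):

```python
import numpy as np

def kmo_and_bartlett(X):
    """Kaiser-Meyer-Olkin sampling adequacy and Bartlett's test of
    sphericity for an (n observations x p variables) data matrix."""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    # Bartlett: is the correlation matrix different from the identity?
    chi2 = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    df = p * (p - 1) // 2
    # KMO: squared correlations vs. squared partial correlations.
    inv = np.linalg.inv(R)
    partial = -inv / np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    off = ~np.eye(p, dtype=bool)
    r2, p2 = (R[off] ** 2).sum(), (partial[off] ** 2).sum()
    kmo = r2 / (r2 + p2)
    return kmo, chi2, df

rng = np.random.default_rng(1)
latent = rng.normal(size=(200, 1))
X = latent + 0.5 * rng.normal(size=(200, 4))   # four correlated variables
kmo, chi2, df = kmo_and_bartlett(X)
```

A KMO near 1 and a large Bartlett χ2 (relative to df = p(p − 1)/2) indicate that the correlation matrix is suitable for factoring.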
Stevens (2002, as cited in Field 2009) recommends retaining items with component loadings of 0.4 and higher; however, a more conservative cutoff (0.58) was selected for two reasons. First, there were (at least) two strong loadings (0.58 or better) for each of the four components (Tabachnick & Fidell 2001). Second, we were motivated to develop a concise instrument to minimize the time necessary to administer the questionnaire. This resulted in a final scale with 16 items; five items related to characteristics of talkers, four items related to speech production, two items related to aspects of listening situations, and five items related to socioemotional well-being (see Table 2; Appendix B, Supplemental Digital Content 2, http://links.lww.com/EANDH/A446). Factor correlations are presented in Table 3. Mean EMO-CHeQ scores collapsed across the participants, item-scale, and item-subscale correlations are provided in Appendix C, Supplemental Digital Content 3, http://links.lww.com/EANDH/A447.
For the 16 items retained on the EMO-CHeQ, items strongly loaded on only a single factor (see Table 2). We also present the component loading scores for the items rejected for inclusion on the 16-item version of the EMO-CHeQ (see Table 2, items in italicized font). All rejected items had component loading scores ≤0.56 except items xxiii to xxvi, which consisted of four items related to mood. Because moods are less specific, less intense, and less likely to be triggered by a particular stimulus, a decision was made to exclude mood-related items on the EMO-CHeQ. Of the 26 items not included in the final version of the EMO-CHeQ, 10 items inquired about aspects of listening situations, (e.g., when lighting is dim, when talkers are not facing each other), seven items inquired about mood (e.g., difficulty identifying the mood of others in conversation, misinterpretation of the respondent’s mood by others), six items inquired about the impact of emotion communication difficulties on social interactions (e.g., missing subtle emotional speech cues in important conversations with professionals such as doctors and lawyers), two items inquired about characteristics of the talker (e.g., speech uttered by children), and one item inquired about speech production (i.e., monitoring the loudness of one’s voice in a manner appropriate for an intended emotion).
The second objective of study 1 was to determine whether differences are present in scores on the EMO-CHeQ for individuals with self-reported NH, HI, and HA (see Fig. 1). Overall, a main effect of hearing status [F(2, 580) = 122.19, p < 0.001] was observed. Post hoc testing revealed that although no difference was observed between the HI (M = 3.3, SD = 0.8) and HA (M = 3.2, SD = 0.8) groups, both groups reported significantly more total handicap than the NH group (M = 2.3, SD = 0.8; p < 0.05). No other significant effects were observed.
An analysis was conducted to elucidate why the HI and HA groups reported similar mean handicap scores on the EMO-CHeQ. One obvious possibility is that the HA participants were simply not benefiting from their hearing aids. Given that this was an internet-based study, we were not able to assess the quality of the hearing aid fittings directly. SADL scores suggested that participants were generally satisfied with their hearing instruments (M = 4.4, SD = 0.7, min = 3.9, max = 6.3). Observation of a significant negative correlation between SADL scores and EMO-CHeQ scores would suggest that communication handicap in environments containing vocal emotion may be lessened by the provision of hearing aids that result in high end-user satisfaction. However, the observed correlation was nonsignificant (r = −0.13, ns). This finding suggests that even participants reporting high hearing aid satisfaction may not benefit from hearing aid use with regard to vocal emotion communication.
Overall, it appears that the 16-item version of the EMO-CHeQ shows good internal reliability, as indicated by high Cronbach’s alpha values: α = 0.90 for the talker characteristics subscale, α = 0.89 for the speech production subscale, α = 0.82 for the situational factors subscale, and α = 0.94 for the socioemotional well-being subscale.
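Cronbach’s alpha follows from the ratio of summed item variances to the variance of total scores; a minimal numpy sketch:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) response
    matrix: alpha = k/(k-1) * (1 - sum(item variances) / var(totals))."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
```

Each subscale’s response matrix would be scored separately; perfectly consistent responses yield α = 1.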
We calculated Pearson bivariate correlations between the EMO-CHeQ, the HHIA-S, and the SADL (see Table 4), as well as correlations between the EMO-CHeQ and HHIA-S for each of the six participant groups (see Table 1). Although no significant correlations were observed between the SADL and either the EMO-CHeQ or the HHIA-S, a significant correlation (r = 0.66, p < 0.001) was observed between the EMO-CHeQ and HHIA-S. These findings suggest that while the two measures overlap, the HHIA-S and EMO-CHeQ do not measure the same underlying set of factors.
STUDY 2: LABORATORY EVALUATION OF THE EMO-CHEQ
We had two principal objectives in study 2. The first was to assess and evaluate the 16-item† version of the EMO-CHeQ in groups of participants with audiometrically verified hearing status (i.e., good hearing, and impaired hearing in unaided and aided listening conditions). The second was to determine whether groups with different reported handicap on the EMO-CHeQ demonstrate significantly different performance on an objective test assessing emotion-identification performance for speech spoken with emotion in audiovisual and audio-only test conditions. Observing significant differences on an emotion-identification task for groups of participants with different performance profiles on the EMO-CHeQ would provide additional support concerning the content validity of the EMO-CHeQ for assessing some of the emotion-related difficulties individuals may encounter in everyday listening situations.
Materials and Methods
The 16-item version of the EMO-CHeQ was evaluated in a laboratory-based study in three groups of listeners: individuals with normal or near-normal audiometric thresholds (NH/nNH), hearing impairment and who did not own or use hearing aids (HI), and hearing impairment and who did own and use hearing aids (HA). All participants completed a demographics form, pure-tone audiometric testing, two questionnaires (HHIA and EMO-CHeQ), and an emotion-identification task in two conditions: (1) audiovisual and (2) audio only. The HA group completed the emotion-identification task while wearing their own hearing aids. Participants completed all activities in approximately 2 to 3 hrs. Participants received an honorarium of $15/hr in compensation for their time.
Forty older adult participants were recruited from the Ryerson University SMART lab participant pool. Eligibility criteria for the study were being a native English speaker who learned English before the age of 5 years, having experienced no recent changes to hearing, and scoring >26 on the Montreal Cognitive Assessment, a short instrument designed to screen for possible mild cognitive impairment (Nasreddine et al. 2005). Eight individuals did not pass the Montreal Cognitive Assessment, and thus the final sample consisted of 32 participants (see Table 5). Eligibility criteria for the NH/nNH group were audiometric thresholds of 35 dB HL or less between 500 and 3000 Hz in both ears. Although this criterion is more lenient than typical standards used in clinical practice, the decision to select 35 dB HL was based in part on screening recommendations developed by Davis et al. (2007), who suggest 35 dB HL as a screening cutoff for identifying people with hearing loss likely to benefit from hearing aids. Eligibility criteria for the HI group were audiometric thresholds of 40 dB HL or higher between 500 and 3000 Hz at a minimum of one test frequency in both ears (see Fig. 2). Eligibility criteria for participants in the HA group also included being a regular (e.g., daily) user of bilateral hearing aids obtained within the previous 4-year period. Of the 10 participants in the HA group, seven wore bilateral receiver-in-the-canal (RIC) behind-the-ear (BTE) hearing aids, two wore bilateral in-the-canal (ITC) hearing aids, and one wore bilateral completely-in-the-canal (CIC) hearing aids.
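The audiometric group assignment can be expressed directly from the criteria above. The helper below is hypothetical (hearing aid ownership then separates HI from HA), and the exact set of audiometric test frequencies between 500 and 3000 Hz is an assumption:

```python
def audiometric_group(left, right, test_freqs=(500, 1000, 2000, 3000)):
    """Assign NH/nNH or HI status from per-ear thresholds in dB HL.
    `left`/`right` map test frequency (Hz) to threshold."""
    if all(left[f] <= 35 and right[f] <= 35 for f in test_freqs):
        return "NH/nNH"   # 35 dB HL or less, 500-3000 Hz, both ears
    if any(left[f] >= 40 for f in test_freqs) and \
       any(right[f] >= 40 for f in test_freqs):
        return "HI"       # >=40 dB HL at >=1 test frequency in each ear
    return "ineligible"

good = {f: 20 for f in (500, 1000, 2000, 3000)}
loss = {f: 55 for f in (500, 1000, 2000, 3000)}
```

Note that a participant with a qualifying loss in only one ear falls into neither group under these criteria.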
All participants were asked to complete a brief demographics questionnaire, the HHIA, and the 16-item version of the EMO-CHeQ. The emotion-identification task involved listening to audio files or watching and listening to audiovisual clips. Stimuli were taken from the Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), a free corpus of validated emotional stimuli (Livingstone & Russo 2018; http://smartlaboratory.org/ravdess). Stimuli were presented in two blocks, with audio files in the first block and video files in the second block. The 96 unique stimuli (48 videos and 48 audio clips) were performed by 12 female and 12 male actors. In each clip, an actor expressed one of eight emotions: happy, sad, calm, neutral, angry, surprised, fearful, and disgusted. Stimuli were presented using the PsyScope testing software on a 27” iMac computer. Sounds were presented over a single Philips SoundShooter portable loudspeaker located at 0° azimuth, 30 cm directly in front of the seated participant, and were calibrated to best match levels encountered in a natural environment, ranging from 50 dB SPL for the quietest sound file (calm) to 88 dB SPL for the loudest sound file (angry). All testing was conducted in a well-lit, double-walled sound-attenuating chamber (Industrial Acoustics Company). After stimulus presentation, participants were presented with eight options corresponding to the eight possible emotions portrayed by the actors, plus a none-of-the-above option. The listener’s task was to identify the emotion portrayed in the stimulus using a keyboard. All participants completed practice trials (typically 5 to 8) until they reported comfort with the task before completing the experimental trials.
For the participants in the HA group, electroacoustic measurements of their hearing aids were conducted using an Audioscan Verifit hearing aid analyzer. Real-ear testing of the fittings was completed using the International Speech Test Signal (Holube 2006) presented at 65 dB SPL. Measurements were not collected for two of the participants. Hearing aid output was compared with NAL-NL2 (Keidser et al. 2011) prescriptive targets (see Fig. 3). Overall, the mean deviation of output from target was −6.0 dB at 0.25, 0.5, 1.0, 1.5, 2, 3, 4, and 6 kHz; −5.6 dB at 0.5, 1, 1.5, 2, and 3 kHz; and −6.2 dB at 0.5, 1, and 2 kHz, values within the ±8 dB criterion established by Polonenko et al. (2010).
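The output-to-target comparison amounts to averaging signed deviations across audiometric bands and checking them against the ±8 dB criterion. The measured and target values below are hypothetical, not the study data:

```python
import numpy as np

# Hypothetical real-ear output and NAL-NL2 targets (dB SPL) at
# 0.25, 0.5, 1, 1.5, 2, 3, 4, and 6 kHz for one fitting.
output = np.array([58.0, 63.0, 66.0, 67.0, 65.0, 59.0, 54.0, 48.0])
target = np.array([64.0, 69.0, 72.0, 73.0, 71.0, 65.0, 60.0, 54.0])

deviation = output - target                   # signed dB re: target
mean_dev_all = deviation.mean()               # across all eight bands
mean_dev_3 = deviation[[1, 2, 4]].mean()      # 0.5, 1, and 2 kHz only
fit_ok = bool(np.all(np.abs(deviation) <= 8)) # the +/- 8 dB criterion
```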
To determine if there were differences on the EMO-CHeQ scale and subscales, five univariate analyses of variance were conducted where hearing status (NH, HI, and HA) was a between-subjects variable for each of the analyses and the total score on the EMO-CHeQ and each of the four subscales were the dependent variable, respectively. Post hoc testing was conducted using the Student–Newman–Keuls method (p < 0.05 corrected for family-wise error). Associations between variables were determined using Pearson bivariate correlations. To assess internal reliability, Cronbach’s alpha was computed for the EMO-CHeQ subscales. To determine if there were differences in emotion-identification performance in the audio and audiovisual conditions, two univariate analyses of variance were conducted where hearing status (NH/nNH, HI, and HA) was a between-subjects variable for each of the analyses and performance on the emotion-identification task in the audio and audiovisual condition was the dependent variable, respectively. Post hoc testing was conducted using the Student–Newman–Keuls method (p < 0.05 corrected for family-wise error). To determine if there were differences on the HHIA questionnaire, a one-way analysis of variance where hearing status (NH/nNH, HI, and HA) was a between-subjects variable and the total score on the HHIA was the dependent variable. Post hoc testing was conducted using the Student–Newman–Keuls method (p < 0.05 corrected for family-wise error). All statistical analyses were conducted using IBM SPSS statistics software (version 24).
The first objective of study 2 was to assess and evaluate the 16-item version of the EMO-CHeQ in groups of participants with normal or near-normal hearing, participants with hearing impairment who did not wear hearing aids, and participants with hearing impairment who wear hearing aids. A main effect of hearing status [F(2, 29) = 4.06, p < 0.05] was observed. Post hoc testing revealed that although no difference was observed between the HI (M = 2.7, SD = 0.7) and HA (M = 2.6, SD = 1.1) groups, both groups reported significantly more total handicap than the NH/nNH (M = 1.8, SD = 0.6) group (p < 0.05; see Fig. 4). For the subscale assessing the influence of talker characteristics on reported handicap, a main effect of hearing status [F(2, 29) = 3.45, p < 0.05] was observed. Post hoc testing revealed that although no difference was observed between the HI (M = 2.2, SD = 0.6) and HA (M = 2.3, SD = 0.9) groups, both groups reported significantly more total handicap than the NH/nNH (M = 1.5, SD = 0.6) group (p < 0.05). For the subscale assessing the influence of situational factors on reported handicap, a main effect of hearing status [F(2, 29) = 4.17, p < 0.05] was observed. Post hoc testing revealed that the HI group (M = 4.3, SD = 0.8) reported significantly more handicap than the NH/nNH group (M = 2.9, SD = 1.3; p < 0.05). For the subscale assessing socioemotional well-being, a main effect of hearing status [F(2, 29) = 3.55, p < 0.05] was observed. Post hoc testing revealed that although no difference was observed between the HI group (M = 2.8, SD = 1.4) and HA group (M = 2.7, SD = 1.3), both groups reported significantly more handicap than the NH/nNH group (M = 1.6, SD = 0.7; p < 0.05). Significant differences were not observed on the subscale assessing speech production.
Overall, the EMO-CHeQ showed good internal reliability, as indicated by high Cronbach’s alpha values: α = 0.83 for the talker characteristics subscale, α = 0.80 for the speech production subscale, α = 0.88 for the situational factors subscale, and α = 0.94 for the socio-emotional well-being subscale.
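For reference, Cronbach's alpha for a k-item subscale is α = k/(k − 1) · (1 − Σσ²ᵢ/σ²ₜ), where the σ²ᵢ are the item variances and σ²ₜ is the variance of the summed scores. A minimal sketch, using hypothetical item data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an item-score matrix of shape (n_respondents, k_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)      # per-item sample variance
    total_variance = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Illustrative check: perfectly correlated items yield alpha = 1.0
base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
perfect = np.column_stack([base, base, base])
print(round(cronbach_alpha(perfect), 3))  # 1.0
```
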
The second objective of study 2 was to determine if groups with different reported handicap on the EMO-CHeQ demonstrate significantly different patterns of performance on an objective test assessing emotion-identification performance. On the emotion-identification task in the audio-only condition, a main effect of hearing status [F(2, 29) = 4.16, p < 0.05] was observed. Post hoc testing revealed that although no difference was observed between the HI (M = 0.59, SD = 0.09) and HA (M = 0.60, SD = 0.06) groups, both groups performed significantly worse than the NH/nNH (M = 0.67, SD = 0.07) group (p < 0.05; see Fig. 5). Significant differences were not observed in the audiovisual condition.
For each of the three subgroups (NH/nNH, HI, and HA), Pearson bivariate correlations were calculated between the EMO-CHeQ (and subscales) and performance on the emotion-identification task (audio-only condition), binaural four-frequency pure-tone hearing thresholds (B4PTA), and the HHIA (see Table 6). Notably, for the HI group, the EMO-CHeQ (total) was a strong significant predictor of emotion-identification performance. Furthermore, for the HA group, B4PTA was a strong predictor of emotion-identification performance. For the sample as a whole, we observed a correlation of r = 0.62, p < 0.01 between the EMO-CHeQ and the HHIA.
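A Pearson bivariate correlation of this kind can be sketched as follows; the paired values below are hypothetical and chosen only to illustrate a negative handicap–performance association, not the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical paired observations: higher reported handicap on the EMO-CHeQ
# paired with lower emotion-identification accuracy (illustrative values only).
emo_cheq = np.array([1.5, 1.9, 2.2, 2.6, 2.8, 3.1, 3.4, 3.8])
accuracy = np.array([0.70, 0.68, 0.66, 0.63, 0.61, 0.60, 0.56, 0.55])

r, p = stats.pearsonr(emo_cheq, accuracy)
print(f"r = {r:.2f}, p = {p:.4f}")  # a strong negative correlation
```
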
Finally, we assessed how well scores on the HHIA discriminate among groups of participants with normal or near-normal hearing, with hearing impairment who did not wear hearing aids, and with hearing impairment who wore hearing aids. A main effect of hearing status [F(2, 29) = 9.57, p < 0.01] was observed. Post hoc testing revealed that the difference between the NH/nNH (M = 9.5, SD = 8.7) and HI (M = 26.4, SD = 18.8) groups approached significance (p = 0.06) and that both groups reported significantly less hearing handicap than the HA (M = 47.2, SD = 29.4) group (p < 0.05).
The main objective of this research was to develop a self-report questionnaire that assesses handicap in situations where a listener wishes to understand vocal emotion information present in the environment. To this end, the EMO-CHeQ questionnaire was developed by (1) reviewing the extant literature, (2) conducting a discussion group with three individuals with hearing loss (and one spouse) to better understand emotion communication from the perspective of end-users, (3) evaluating proposed questionnaire items with nine content experts, (4) distributing a 42-item version of the questionnaire to 586 individuals with self-reported normal hearing, unaided hearing loss, or aided hearing, (5) conducting an EFA to better understand the questionnaire’s factor structure and develop a shorter questionnaire, and (6) evaluating the 16-item version of the EMO-CHeQ in groups of individuals with audiometrically verified hearing to determine if the EMO-CHeQ can distinguish subgroups of participants in terms of self-reported emotion communication hearing handicap and whether such groups of subjects performed differently on an objective test of vocal emotion identification in aided and unaided listening conditions. 
Broadly, we found that (1) a questionnaire investigating vocal emotion communication was not available in the literature; (2) focus group participants indicated that vocal emotion communication is relevant in a variety of listening contexts; (3) a group of individuals with self-reported normal hearing reported less vocal emotion communication handicap than groups with self-reported hearing loss, but aided listeners reported levels of handicap similar to those of unaided listeners, suggesting that modern hearing aids may not adequately address emotion communication handicap; (4) four factors seem to underlie the EMO-CHeQ questionnaire (talker characteristics, speech production, listening in complex situations, and socio-emotional well-being); (5) the 16-item version of the EMO-CHeQ differentiated vocal emotion communication handicap in a group of individuals with audiometrically verified normal or near-normal hearing from that in groups of individuals with hearing loss, but groups of aided and unaided listeners again reported similar degrees of handicap, again suggesting that modern hearing aids may not adequately address emotion communication handicap; (6) to the extent that an objective task of auditory emotion identification represents a gold-standard measure of vocal emotion communication, the EMO-CHeQ demonstrates good criterion validity, and it demonstrates good internal reliability as evidenced by Cronbach's alpha subscale values; and (7) a group of individuals with audiometrically normal or near-normal hearing thresholds between 500 and 3000 Hz outperformed groups of individuals with impaired or aided hearing on an emotion-identification task, whereas emotion-identification performance was similar between groups of unaided and aided listeners, a pattern identical to that observed on the EMO-CHeQ in both studies 1 and 2.
The presence of behavioral deficits on the emotion-identification task that correspond to the pattern of scores on the EMO-CHeQ suggests that the EMO-CHeQ is an ecologically valid measure of self-reported vocal emotion communication handicap. More broadly, the results from both studies suggest that the EMO-CHeQ is a valid measure for rapidly assessing experiences of hearing and handicap in the context of vocal emotion communication.
The finding that self-reported use of hearing aids (study 1) or electroacoustically verified use of hearing aids (study 2) did not decrease scores on the EMO-CHeQ, and that use of hearing aids did not improve emotion-identification performance (study 2), is noteworthy. This pattern of findings suggests either that the EMO-CHeQ is not sufficiently sensitive to detect benefit from hearing aids or that modern hearing aids currently do not adequately address the vocal emotion deficits associated with hearing loss. These findings, observed with adult participants, are consistent with past research observing that children with hearing loss fitted with either cochlear implants (Hopyan-Misakyan et al. 2009; Most & Aviner 2009; Chatterjee et al. 2015) or hearing aids (Most & Aviner 2009) exhibit significantly worse emotion-identification abilities compared with peers with normal hearing. Two points are worth mentioning. First, it is currently unknown whether deficits in vocal emotion communication can be remediated by hearing instruments. Future research should elucidate the mechanisms that underpin the vocal emotion communication deficits observed in individuals with hearing loss because such findings could inform research on hearing rehabilitation. To this end, we observed that scores on the EMO-CHeQ did not correlate with hearing thresholds in study 2. The failure to observe a significant correlation between EMO-CHeQ scores and hearing thresholds may have arisen either because the study was insufficiently powered to detect this relationship or because a mechanism (or mechanisms) other than elevated hearing thresholds contributes to vocal emotion communication deficits. Second, if we assume that hearing rehabilitation (e.g., hearing aids, training, etc.) can potentially ameliorate vocal emotion communication deficits, the EMO-CHeQ might provide a brief and straightforward method to assess such deficits and to gauge the efficacy of treatment interventions.
Accordingly, we suggest additional study of the EMO-CHeQ for this purpose.
Potential avenues for rehabilitating vocal emotion communication deficits are both technological and training based. With regard to technology, it seems likely that more linear processing strategies may ameliorate some of the emotion processing deficits observed in aided listeners. Similar approaches have proved useful for supporting music processing (e.g., van Buuren et al. 1999; Arehart et al. 2011; Croghan et al. 2014; Kirchberger & Russo 2016). With regard to training, some research in cochlear implant users suggests that music training can lead to improvements in emotion identification (Peterson et al. 2012; Good et al. 2017). To the best of our knowledge, similar research has not yet been conducted in older adults with hearing loss. The EMO-CHeQ could serve as an appropriate outcome measure in future work assessing the efficacy of such interventions.
Significant correlations were observed with performance on the emotion-identification task in the audio-only condition. Most notably, for the HI group, the EMO-CHeQ was a significant and strong predictor (r = −0.64) of emotion-identification performance, such that participants with more reported handicap on the EMO-CHeQ performed more poorly on the behavioral task. For the HA group, the EMO-CHeQ was not significantly correlated with emotion-identification performance. It could be that the sample size employed in study 2 for the HA group (n = 10) left the analysis underpowered to detect a significant correlation between the EMO-CHeQ and the emotion-identification task for this group. For the HA group, a significant and high correlation (r = −0.73) between emotion-identification performance and audiometric (i.e., B4PTA) thresholds was observed, such that individuals with worse hearing performed more poorly on the emotion-identification task. Although speculative at this point, it could be that individuals with poorer hearing wore hearing instruments with greater compression than individuals with better hearing and that greater compression has a deleterious effect on emotion-identification performance.
An important discussion point concerns whether the EMO-CHeQ provides information above and beyond that provided by the HHIA questionnaire. The significant correlations between the two measures (r = 0.66 in study 1 and r = 0.62 in study 2) suggest that they overlap. However, there is reason to suspect that the EMO-CHeQ is conceptually different and provides insights beyond those provided by the HHIA, thus providing evidence of discriminant validity. First, the EMO-CHeQ was specifically designed to address hearing handicap related to vocal emotion communication, whereas the HHIA is a more general measure of hearing handicap. Second, although the HHIA failed to discriminate the NH/nNH and HI groups, the EMO-CHeQ was able to differentiate between them. Third, the EMO-CHeQ was strongly correlated with emotion-identification performance in the HI group, whereas the correlation between the HHIA and performance approached significance in the NH/nNH group. With respect to auditory emotion hearing handicap, this pattern suggests that the two measures may be better suited to discriminating different populations of individuals. Moving forward, it is recommended that research on vocal emotion and hearing loss consider using both the EMO-CHeQ and the HHIA.
Focus groups may be either exploratory or confirmatory: exploratory focus groups assess perceived problems and identify areas for additional investigation, whereas confirmatory focus groups assess existing solutions. Because a formal thematic analysis of the discussion with participants was not conducted at the outset of this work, one possible avenue for future research would be to conduct a confirmatory focus group to help establish the validity of the EMO-CHeQ questionnaire.
One of the limitations associated with this work concerns the lack of objective hearing testing of the participants in study 1. There is, however, reason to suspect that this limitation does not meaningfully affect the results of the study. Specifically, participants completed a self-report hearing screening measure (the HHIA-S). We observed that 15.6% of the NH group failed the HHIA-S screening measure (scores of 10 or greater, as suggested by Newman et al. 1990), whereas 5.1% of the HI group did not fail it. Analyses were conducted both including and excluding these subgroups, and in both cases retaining or dropping these subsets of individuals did not meaningfully affect the broader pattern of results. Including all individuals, we observed mean EMO-CHeQ scores of NH = 2.3 (SD = 0.8), HI = 3.4 (SD = 0.7), and HA = 3.3 (SD = 0.7). Had we excluded NH individuals who failed the HHIA-S screening and excluded HI and HA individuals who did not fail it, we would have observed mean EMO-CHeQ scores of NH = 2.2 (SD = 0.7), HI = 3.5 (SD = 0.6), and HA = 3.4 (SD = 0.7). All differences were ≤0.1 points on the EMO-CHeQ.
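The sensitivity check described above reduces to recomputing group means after filtering on the screening cutoff; a minimal sketch with hypothetical scores:

```python
import numpy as np

# Hypothetical sensitivity check (illustrative data, not the study's):
# compare one group's mean EMO-CHeQ score with and without respondents
# flagged by an HHIA-S cutoff of >= 10 (Newman et al. 1990).
emo_scores = np.array([2.1, 2.4, 1.9, 2.8, 3.6, 2.0, 2.2, 3.9])
hhia_s = np.array([4, 6, 2, 8, 14, 5, 7, 16])  # screening scores

all_mean = emo_scores.mean()
passed = emo_scores[hhia_s < 10]  # drop respondents who failed the screen
screened_mean = passed.mean()
print(round(all_mean, 2), round(screened_mean, 2))
```
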
To establish the existence of group differences on a self-report measure, it is important to demonstrate evidence of measurement invariance; we leave it to future research to address this limitation. One approach (Sousa et al. 2012) would be to investigate the underlying factor structure across the three groups of interest (NH, HI, and HA) to assess whether a similar factor structure is observed in each. Our rationale for not reporting an EFA for each group separately is that doing so would have resulted in participant-to-variable ratios lower than those recommended by Tabachnick and Fidell (2001).
One aspect tangential to the purpose of this research, but one that could inform hearing research more broadly, concerns the value of conducting research via the internet. In the current study, identical patterns of results were observed between study 1 (which was conducted via the internet) and study 2 (a laboratory-based study) in terms of between-group scores on the EMO-CHeQ (the HI and HA groups reporting similar vocal communication handicap, with both groups reporting more handicap than the NH/nNH group). Furthermore, correlations between the EMO-CHeQ and the HHIA/HHIA-S questionnaires were essentially identical between the two studies. Although it is beyond the scope of this work to adequately discuss the costs and benefits of conducting internet-based hearing research, the current research suggests that internet-based research methods can yield patterns of performance similar to those observed in laboratory-based research (see also Singh et al. 2015).
The authors have no conflicts of interest to disclose.
* The sole exception is item Q13 of the Speech, Spatial, and Qualities of Hearing scale (Gatehouse and Noble 2004): “Can you easily judge another person’s mood from the sound of their voice?”.
† Participants in study 2 completed a 17-item version of the SSQ. The additional item was excluded from all analyses in study 2.
Arehart K. H., Kates J. M., Anderson M. C. Effects of noise, nonlinear processing, and linear filtering on perceived music quality. Int J Audiol, 2011, 50, 177–190.
Barker A. B., Leighton P., Ferguson M. A. Coping together with hearing loss: A qualitative meta-synthesis of the psychosocial experiences of people with hearing loss and their communication partners. Int J Audiol, 2017, 56, 297–305.
Cacciatore F., Napoli C., Abete P., et al. Quality of life determinants and hearing function in an elderly population: Osservatorio Geriatrico Campano Study Group. Gerontology, 1999, 45, 323–328.
Chatterjee M., Zion D. J., Deroche M. L., et al. Voice emotion recognition by cochlear-implanted children and their normally-hearing peers. Hear Res, 2015, 322, 151–162.
Ciorba A., Bianchini C., Pelucchi S., et al. The impact of hearing loss on the quality of life of elderly adults. Clin Interv Aging, 2012, 7, 159–163.
Cox R. M., Alexander G. C. Measuring satisfaction with amplification in daily life: The SADL scale. Ear Hear, 1999, 20, 306–320.
Croghan N. B., Arehart K. H., Kates J. M. Music preferences with hearing aids: Effects of signal properties, compression settings, and listener characteristics. Ear Hear, 2014, 35, e170–e184.
Dalton D. S., Cruickshanks K. J., Klein B. E., et al. The impact of hearing loss on quality of life in older adults. Gerontologist, 2003, 43, 661–668.
Davis A., Smith P., Ferguson M., et al. Acceptability, benefit and costs of early screening for hearing disability: A study of potential screening tests and models. Health Technol Assess, 2007, 11, 1–294.
Davitz J. R. The Communication of Emotional Meaning. 1964. Oxford, England: McGraw-Hill.
DeVellis R. F. Scale Development: Theory and Applications (Applied Social Research Methods Series, Vol. 26). 1991. Newbury Park: Sage.
Dupuis K., Pichora-Fuller M. K. Intelligibility of emotional speech in younger and older adults. Ear Hear, 2014, 35, 695–707.
Dyck M. J., Farrugia C., Shochet I. M., et al. Emotion recognition/understanding ability in hearing or vision-impaired children: Do sounds, sights, or words make the difference? J Child Psychol Psychiatry, 2004, 45, 789–800.
Ekkekakis P. The Measurement of Affect, Mood, and Emotion: A Guide for Health-Behavioral Research. 2013. Cambridge: Cambridge University Press.
Field A. Discovering Statistics Using SPSS (2nd ed.). 2005. London: Sage.
Field A. Discovering Statistics Using SPSS (3rd ed.). 2009. London, United Kingdom: Sage Publications.
Fleiss J. L. Reliability of measurement. In The Design and Analysis of Clinical Experiments (pp. 1–45). 1986. New York, NY: John Wiley & Sons.
Garstecki D. C., Erler S. F. Older adult performance on the communication profile for the hearing impaired. J Speech Hear Res, 1996, 39, 28–42.
Gatehouse S., Noble W. The Speech, Spatial and Qualities of Hearing Scale (SSQ). Int J Audiol, 2004, 43, 85–99.
Good A., Gordon K. A., Papsin B. C., et al. Benefits of music training for perception of emotional speech prosody in deaf children with cochlear implants. Ear Hear, 2017, 38, 455–464.
Hétu R., Jones L., Getty L. The impact of acquired hearing impairment on intimate relationships: Implications for rehabilitation. Audiology, 1993, 32, 363–381.
Hintermair M. Hearing impairment, social networks, and coping: The need for families with hearing-impaired children to relate to other parents and to hearing-impaired adults. Am Ann Deaf, 2000, 145, 41–53.
Holube I.; EHIMA-ISMADHA Working Group. Short description of the international speech test signal (ISTS). 2006. Oldenburg, Germany: Center of Competence HörTech and Institute of Hearing Technology and Audiology.
Hopyan-Misakyan T. M., Gordon K. A., Dennis M., et al. Recognition of affective speech prosody and facial affect in deaf children with unilateral right cochlear implants. Child Neuropsychol, 2009, 15, 136–146.
Kaiser H. F. The application of electronic computers to factor analysis. Educ Psychol Meas, 1960, 20, 141–151.
Keidser G., Dillon H., Flax M., et al. The NAL-NL2 prescription procedure. Audiol Res, 2011, 1, e24.
Kirchberger M., Russo F. A. Dynamic range across music genres and the perception of dynamic compression in hearing-impaired listeners. Trends Hear, 2016, 20, 2331216516630549.
Kurtzer-White E., Luterman D. Families and children with hearing loss: Grief and coping. Ment Retard Dev Disabil Res Rev, 2003, 9, 232–235.
Livingstone S. R., Russo F. A. The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS ONE, 2018, 13(5), e0196391.
Lynn M. R. Determination and quantification of content validity. Nurs Res, 1986, 35, 382–385.
Moses K. The impact of initial diagnosis: Mobilizing family resources. In Mulick J. A., Pueschel S. M. (Eds.), Parents-Professional Partnerships in Developmental Disability Services (pp. 11–34). 1983. Cambridge: Academic Guild Publishers.
Most T., Aviner C. Auditory, visual, and auditory-visual perception of emotions by individuals with cochlear implants, hearing aids, and normal hearing. J Deaf Stud Deaf Educ, 2009, 14, 449–464.
Most T., Michaelis H. Auditory, visual, and auditory-visual perceptions of emotions by young children with hearing loss versus children with normal hearing. J Speech Lang Hear Res, 2012, 55, 1148–1162.
Naramura H., Nakanishi N., Tatara K., et al. Physical and mental correlates of hearing impairment in the elderly in Japan. Audiology, 1999, 38, 24–29.
Nasreddine Z. S., Phillips N. A., Bédirian V., et al. The Montreal Cognitive Assessment, MoCA: A brief screening tool for mild cognitive impairment. J Am Geriatr Soc, 2005, 53, 695–699.
Newman C. W., Weinstein B. E., Jacobson G. P., et al. The Hearing Handicap Inventory for Adults: Psychometric adequacy and audiometric correlates. Ear Hear, 1990, 11, 430–433.
Newman C. W., Weinstein B. E., Jacobson G. P., et al. Test-retest reliability of the Hearing Handicap Inventory for Adults. Ear Hear, 1991, 12, 355–357.
Norman G. Likert scales, levels of measurement and the “laws” of statistics. Adv Health Sci Educ Theory Pract, 2010, 15, 625–632.
Peterson B., Mortensen M. V., Hansen M., et al. Singing in the key of life: A study on effects of musical ear training after cochlear implantation. Psychomusicology, 2012, 22, 134–151.
Picou E. M. How hearing loss and age affect emotional responses to nonspeech sounds. J Speech Lang Hear Res, 2016, 59, 1233–1246.
Polit D., Hungler B. Nursing Research: Principles and Methods (6th ed.). 1999. Philadelphia: Lippincott.
Polonenko M. J., Scollie S. D., Moodie S., et al. Fit to targets, preferred listening levels, and self-reported outcomes for the DSL v5.0a hearing aid prescription for adults. Int J Audiol, 2010, 49, 550–560.
Preminger J. E., Meeks S. Evaluation of an audiological rehabilitation program for spouses of people with hearing loss. J Am Acad Audiol, 2010, 21, 315–328.
Scarinci N., Worrall L., Hickson L. The effect of hearing impairment in older people on the spouse: Development and psychometric testing of the Significant Other Scale for Hearing Disability (SOS-HEAR). Int J Audiol, 2009, 48, 671–683.
Scherer M. J., Frisina D. R. Characteristics associated with marginal hearing loss and subjective well-being among a sample of older adults. J Rehabil Res Dev, 1998, 35, 420–426.
Singh G., Lau S. T., Pichora-Fuller M. K. Social support predicts hearing aid satisfaction. Ear Hear, 2015, 36, 664–676.
Smith S. L., Pichora-Fuller K. M., Watts K. L., et al. Development of the Listening Self-Efficacy Questionnaire (LSEQ). Int J Audiol, 2011, 50, 417–425.
Sousa K. H., West S. G., Moser S. E., et al. Establishing measurement invariance: English and Spanish Paediatric Asthma Quality of Life Questionnaire. Nurs Res, 2012, 61, 171–180.
Stephens D., France L., Lormore K. Effects of hearing impairment on the patient’s family and friends. Acta Otolaryngol, 1995, 115, 165–167.
Stevens J. P. Applied Multivariate Statistics for the Social Sciences (4th ed.). 2002. Hillsdale, NJ: Erlbaum.
Streiner D. L., Norman G. R. Health Measurement Scales: A Practical Guide to Their Development and Use (2nd ed.). 1995. New York: Oxford University Press.
Tabachnick B. G., Fidell L. S. Using Multivariate Statistics. 2001. Boston: Allyn and Bacon.
Tambs K. Moderate effects of hearing loss on mental health and subjective well-being: Results from the Nord-Trøndelag Hearing Loss Study. Psychosom Med, 2004, 66, 776–782.
van Buuren R. A., Festen J. M., Houtgast T. Compression and expansion of the temporal envelope: Evaluation of speech intelligibility and sound quality. J Acoust Soc Am, 1999, 105, 2903–2913.
Ventry I. M., Weinstein B. E. The Hearing Handicap Inventory for the Elderly: A new tool. Ear Hear, 1982, 3, 128–134.
Wallhagen M. I., Strawbridge W. J., Shema S. J., et al. Impact of self-assessed hearing loss on a spouse: A longitudinal analysis of couples. J Gerontol B Psychol Sci Soc Sci, 2004, 59, S190–S196.
World Health Organization. Atlas: Mental Health Resources in the World 2001. 2001. Geneva: World Health Organization.
World Health Organization. The Global Burden of Disease: 2004 Update. 2008. Geneva: World Health Organization.
Yoshinaga-Itano C., Abdala de Uzcategui C. Early identification and social emotional factors of children with hearing loss and children screened for hearing loss. In Kurtzer-White E., Luterman D. (Eds.), Early Childhood Deafness (pp. 13–28). 2001. Baltimore, MD: York Press.
Keywords: Emotion; Hearing; Hearing aids; Hearing handicap; Hearing loss
Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.