HIGHLIGHTS FROM THE ACIA 15TH SYMPOSIUM ON COCHLEAR IMPLANTS IN CHILDREN IN SAN FRANCISCO

Training of Speech Perception in Noise in Pre-Lingual Hearing Impaired Adults With Cochlear Implants Compared With Normal Hearing Adults

Bugannim, Yossi; Roth, Daphne Ari-Even; Zechoval, Doreen; Kishon-Rabin, Liat

doi: 10.1097/MAO.0000000000002128

Abstract

Despite major advances in cochlear implant (CI) technology, speech perception in background noise remains a significant challenge for CI users in everyday life, especially when the noise and the target are not spatially separated (1). In such cases, CI users have been shown to require a signal-to-noise ratio (SNR) that is 10 to 25 dB higher than that of their hearing peers to achieve similar performance (2,3), placing them at a significant disadvantage in many real-life listening situations (4). This inability of CI users to process speech in noise effectively is probably related to a combination of reduced spectral information (due to the limited number of spectral channels) and increased modulation interference (due to their susceptibility to modulated masking noise) (5). Therefore, much effort has been devoted to improving speech perception in noise for CI users, primarily via technological means aimed at enhancing the speech signal at the periphery (2,3,5–11). These approaches have resulted in limited benefit, supporting the notion that the central auditory system needs to be actively involved to compensate for the impoverished cues that the CI provides (10). It has been suggested that training the auditory system to attend to speech stimuli in noise by invoking cognitive demands (e.g., auditory working memory and attention) may improve sensory acuity at lower processing levels (i.e., "top-down" processing), thus complementing the "bottom-up" processing provided by the CI device (12–14). This rationale is based on findings that central auditory structures are involved when perceiving speech in difficult listening conditions (14–16) and that a period of training is required to couple the signals from the implant to the central auditory system in order to optimize the benefit from the implant (17).

Relatively few studies have tested the improvement of speech perception in noise following auditory training in CI users, and they have used different training materials and training protocols. In a systematic review aimed at assessing the efficacy of computer-based auditory training (CBAT) for adults with hearing loss (studies published between 1986 and 2013), only 13 studies met the inclusion criteria; seven of these trained adults with CI, and only five of those trained in noise (12). Moreover, most of the studies trained postlingually deafened adults using vowels, consonants, syllables, and/or words (4,10,13,18–20). Studies also differed in the type of noise used (e.g., continuous (13) and/or multi-talker babble (4,13)) and in the amount of training (e.g., from 5 days (10) to 4 weeks (4,13)). A few studies trained in quiet but assessed generalization of learning to untrained speech materials in noise (17,21). Only a few of the aforementioned studies showed post-training improvements.

Possible factors that contributed to the small and inconsistent benefit from training in CI users include a lack of homogeneity in training protocols and training conditions that may require different cognitive demands, a variety of outcome measures ("bottom-up" sensory refinement versus "top-down" sentence recognition), and a wide age range of the participants (46–78 years old (13); 50–85 yrs (10)) (12). It is also possible that the postlingually deafened CI adults who were included in the mentioned studies had difficulty adapting to the novel stimuli transmitted by the CI device because of an existing acoustic-linguistic cortical map. Thus, training prelingually deafened young adults who had no or minimal acoustic hearing before cochlear implantation, and comparing their performance with that of normal hearing (NH) young adults under similar training conditions, may provide new insight into the factors influencing the benefit of auditory training in noise. Therefore, the purpose of the present study was to assess the effect of training on a speech perception in noise task in young adult CI users with prelingual hearing loss, who had years of experience with their implant, as compared with normal-hearing young adults, following single- and multi-session training using the same training protocol.

METHODS

Participants

Twenty-two adult CI users (M age = 26.0 ± 7.3 yrs) and 30 adults with NH (M age = 24.4 ± 1.8 yrs) participated in the present study. Twelve of the CI users had received their first CI device before the age of 6 years (range, 2–6 yrs; M = 3.5 ± 1.91 yrs; termed "early" implanted) and 10 had received their first implant after the age of 13 years (range, 13–44 yrs; M = 22.80 ± 10.07 yrs; termed "late" implanted). Individual background information for the CI users is shown in Table 1. Of these, seven CI users (M age = 29.4 ± 9.3 yrs) and six NH adults (M age = 26.0 ± 1.8 yrs) continued to multi-session training; they are indicated with an asterisk (*) in Table 1. All used spoken language as their primary mode of communication. Participants were recruited via the internet and social networks and were paid for their participation. The NH adults had pure-tone thresholds less than or equal to 20 dB HL bilaterally at 0.25 to 4 kHz. The study was approved by the Institutional Review Board (ethics committee) of Tel Aviv University.

TABLE 1: Background information of the CI participants including age of hearing loss identification, etiology of hearing loss, age at implantation of first CI device, duration of use of first CI device, the hearing apparatus (bilateral CI, unilateral CI, or bimodal CI+HA), the ear which was implanted with the CI device, and CI manufacturer

Training Material—Hebrew Matrix test

Training was conducted using the Hebrew version of the Matrix sentence-in-noise test, which uses an adaptive procedure to estimate the speech reception threshold in noise (SRTn), i.e., the SNR at which 50% of the words are correctly repeated (22). The test is described in Appendix A.

Cognitive Assessment

For all participants, two cognitive capabilities were assessed: one specific to the auditory modality and one reflecting a more general attention capability, both of which have been suggested to be involved in listening-in-noise tasks (e.g., (34)). Specifically, auditory working memory was assessed using the backward digit span subtest of the Wechsler Intelligence Scale (23), in which participants heard sequences of numbers (e.g., 2, 5, 4, and 1) and were asked to repeat them in reverse order. The passing criterion for advancing to the next longer sequence was two successful repetitions of sequences of the same length. Visual attention and task-switching abilities were assessed using the Trail Making Test part A, in which participants were required to connect numbers on a sheet of paper as quickly as possible (24). Non-verbal intelligence of the CI users was assessed using the Raven's Standard Progressive Matrices test (25).
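
For illustration only, the scoring logic of the backward digit span task described above can be sketched in Python; the function names are hypothetical, and the rule that the test ends once a sequence length is failed is an assumption, since the discontinuation rule is not spelled out here.

def is_correct(presented, response):
    # A trial is correct if the response is the presented digits in reverse order.
    return list(response) == list(reversed(presented))

def backward_digit_span(trials):
    # trials: dict mapping sequence length -> list of (presented, response) pairs.
    # Returns the longest length at which at least two trials were repeated correctly.
    span = 0
    for length in sorted(trials):
        n_correct = sum(is_correct(p, r) for p, r in trials[length])
        if n_correct >= 2:      # passing criterion described in the text
            span = length
        else:
            break               # assumed stopping rule (not stated in the text)
    return span

# Example: the digits 2, 5, 4, 1 repeated back as 1, 4, 5, 2 are scored correct.
print(is_correct([2, 5, 4, 1], [1, 4, 5, 2]))   # True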

Apparatus

All testing took place in a sound-treated room. Stimuli were delivered from a laptop computer through an external sound card to a loudspeaker located 1.5 m in front of the participant (S0N0). Bilateral CI users were tested wearing both CIs, whereas bimodal listeners were tested only with their CI device (hearing aid turned off). NH participants were tested monaurally via Sennheiser HDA-200 headphones, 16 in the right ear and 14 in the left ear.

Hebrew AB Word Lists in Noise

Generalization of training-induced gains to untrained speech material was assessed for the CI group using prerecorded Hebrew AB word lists (26,27). Similar to the English version of the test (27), the Hebrew AB test consists of 15 isophonemic word lists. Each list contains 10 consonant-vowel-consonant monosyllabic meaningful words (26). Because Hebrew has only five vowels, every list contains each vowel twice and each consonant once. Two Hebrew AB (HAB) word lists were presented in white noise at an SNR of +3 dB before and after the multi-session training.
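
As a rough illustration of why phoneme-level scores exceed word-level scores for such lists (see Results), the sketch below shows one way word and phoneme percent-correct could be computed for a 10-word consonant-vowel-consonant list; the transcription format and function name are assumptions, not the authors' scoring code.

def score_hab_list(targets, responses):
    # targets, responses: lists of (C, V, C) phoneme tuples, one tuple per word.
    # A word is correct only if all three phonemes are correct; phoneme scoring
    # gives partial credit, which is why phoneme scores exceed word scores.
    word_hits = sum(t == r for t, r in zip(targets, responses))
    phoneme_hits = sum(tp == rp
                       for t, r in zip(targets, responses)
                       for tp, rp in zip(t, r))
    n_words = len(targets)
    return (100.0 * word_hits / n_words,            # % words correct
            100.0 * phoneme_hits / (3 * n_words))   # % phonemes correct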

Experiment Design

All participants took part in a single training session. To familiarize them with the task, each participant first listened to two lists from the Matrix test (20 sentences per list), one in quiet and one at a fixed SNR, as done previously (28). Training in the single session included four SRTn measurements, which allowed within-session improvement to be followed. Cognitive tests were administered once to all participants at the end of the first training session.

Seven CI users and six adults with NH continued training for four additional sessions, for a total of five training days spaced 2 to 3 days apart. In each session, six SRTn measurements were obtained, again allowing within-session improvement to be followed. The CI users then continued to train for an additional 5 days, with six SRTn measurements collected each day. Overall, this subgroup of CI users listened to 1,200 sentences at varying SNRs over 10 training days. At the end of the last training day, generalization was assessed using two unfamiliar word lists of the HAB in noise.
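
The total of 1,200 sentences quoted above follows directly from the session structure, as this trivial check illustrates:

# 10 training days x 6 SRTn tracks per day x 20 sentences per track
days, tracks_per_day, sentences_per_track = 10, 6, 20
print(days * tracks_per_day * sentences_per_track)   # 1200 sentences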

Retention of the training benefit was measured 3 months after the end of training for six of the CI users who had received 10 days of training. Four SRTn measurements of the Hebrew Matrix test were obtained at the retention session.

Tables A and B in the supplemental material (http://links.lww.com/MAO/A729) show the data in the order in which they were collected for the single- and multi-session training, respectively.

Data Analysis

Normal distribution of the data was confirmed using the Kolmogorov–Smirnov test, permitting parametric statistics. Because the two subgroups of CI users (early and late implanted) did not differ in their background data, in HAB performance post implantation, or in their SRTn in the first four measurements, further analysis was conducted on the combined data of these subgroups. Results of the comparison between the early and late implanted CI subgroups are included in Appendix B. Statistical analysis was conducted using SPSS software (IBM SPSS Statistics, Armonk, NY). Significance was set at 0.05.
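
The authors ran the analysis in SPSS; purely as an illustration of the normality check described above, an equivalent Kolmogorov–Smirnov test can be sketched in Python with scipy (the SRTn values below are placeholders, not study data).

import numpy as np
from scipy import stats

srtn = np.array([-8.5, -7.9, -9.1, -8.0, -7.4, -8.8])   # hypothetical SRTn values (dB SNR)

# Compare the sample with a normal distribution having the sample mean and SD.
stat, p = stats.kstest(srtn, 'norm', args=(srtn.mean(), srtn.std(ddof=1)))
print(f"KS statistic = {stat:.3f}, p = {p:.3f}")
# p > 0.05 is taken as consistent with normality, permitting parametric
# statistics such as the repeated measures ANOVAs reported below (alpha = 0.05).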

RESULTS

Single-Session Training

The SRTn for all participants in the single-session training are shown in Figure 1, together with the mean group data. NH participants reached SRTn that were better than those of the CI users by an average of 9.45 dB SNR (based on the average of measurements 3 and 4). There was a large variation in the SRTn of the CI users (range, −3.7 to +14 dB SNR) compared with the NH group (range, −10.1 to −6.3 dB SNR).

FIG. 1: Individual SRTn (thin lines) and mean group data (thick lines) for all CI and NH participants in the single-session training. Each thin line and symbol represents an individual participant. CI indicates cochlear implant; NH, normal-hearing; SRTn, speech reception thresholds in noise.

A two-way repeated measures analysis of variance (ANOVA) was conducted with measurement (1–4) as the within-subject variable and group (NH, CI) as the between-subject variable. The results revealed a significant effect of group (F [1, 50] = 159.0, p < 0.001, η2 = 0.8), with the NH showing better thresholds (mean SRTn = −8.1 ± 0.5 dB SNR) compared with the CI (mean SRTn = 1.3 ± 0.6 dB SNR). Also, the analysis showed a significant effect of measurement (F [3,150] = 4.1, p < 0.005, η2 = 0.1) with a significant linear effect (p = 0.001).

The two groups differed in Trail Making Test part A scores (t [50] = −1.8, p < 0.05), with the CI users responding more slowly (M = 20.5 ± 5.6 s) than the NH group (M = 17.7 ± 5.1 s). They also differed in working memory, with a smaller (worse) backward digit span for the CI users compared with the NH (M = 4.5 ± 1.3 and M = 5.1 ± 0.9, respectively; t [50] = 2.2, p < 0.01). No significant correlations were found between the cognitive tests and SRTn within each group (p > 0.05). For the CI group, Pearson correlation revealed a significant negative association (r [19] = −0.6, p = 0.0017) between the first two SRTn and HAB monosyllabic word recognition in quiet (shown in Table 1). A similar significant association was found between HAB in quiet and the last two SRTn measurements (r [19] = −0.6, p = 0.0036), suggesting that better speech recognition scores in quiet (before training) are associated with better (i.e., lower) SRTn in a single session of testing.

Five-Day Training CI Versus NH

The mean SRTn (±SE) for each group during the 5-day training are shown in Figure 2, together with the quadratic and linear change in performance across the six measurements within each day of training and the linear regression functions across the 5 days of training (30 measurements) for the NH and CI groups, suggesting that measurement number can explain 94% and 83% of the variance in SRTn performance, respectively. The disadvantage in SRTn that the CI users showed compared with the NH in the single-session training was maintained during the additional 4 days of training. A three-way repeated measures ANOVA was conducted with Day (1–5) and Measurement (1–6) as the within-subject variables and Group (NH, CI) as the between-subject variable. Results confirmed a significant Group effect (F [1,11] = 59.1, p < 0.0001, η2 = 0.8), with the NH showing better thresholds (mean SRTn = −9.3 ± 1.0 dB SNR) than the CI users (mean SRTn = 0.8 ± 0.9 dB SNR), a difference of 10.2 dB SNR. The effect of Day was significant (F [4,44] = 9.6, p < 0.0001, η2 = 0.5) with a linear effect (p = 0.003), suggesting that learning occurred throughout the training days. The effect of Measurement was also significant (F [5, 55] = 8.0, p < 0.005, η2 = 0.4) with linear and quadratic effects (p < 0.01), suggesting within-session improvement. No significant interactions were found for Day × Measurement (F [20,220] = 0.9, p = 0.5, η2 = 0.1) or Measurement × Group (F [5, 55] = 1.8, p = 0.2, η2 = 0.1).
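
The percentages of variance explained quoted above come from fitting straight lines to SRTn as a function of measurement number; a minimal sketch of such a fit and its R^2, using hypothetical data, follows (the slope and noise values are arbitrary).

import numpy as np

rng = np.random.default_rng(0)
measurement = np.arange(1, 31)                      # 5 days x 6 SRTn tracks = 30 measurements
srtn = -6.0 - 0.12 * measurement + rng.normal(0, 0.4, 30)   # hypothetical group-mean SRTn (dB SNR)

slope, intercept = np.polyfit(measurement, srtn, 1) # least-squares straight line
predicted = slope * measurement + intercept
ss_res = np.sum((srtn - predicted) ** 2)
ss_tot = np.sum((srtn - srtn.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
print(f"slope = {slope:.2f} dB per measurement, R^2 = {r_squared:.2f}")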

FIG. 2: Group mean SRTn (±1 SE) for CI and NH participants during the 5-day training and for the CI users during the additional 5-day training and at the retention session 3 months post-training. Also shown are the quadratic and linear changes in performance across the six measurements within each day of training, and the linear regression functions across the 5 days of training (30 measurements) for the NH and CI groups and across the 10 days of training (60 measurements) for the CI group. CI indicates cochlear implant; NH, normal-hearing; SRTn, speech reception thresholds in noise.

For each participant, a Pearson correlation was conducted to assess learning during the first 5 days of training (Table 2). All six NH individuals showed significant improvement between the last two measurements of Day 1 and Day 5 (SRTn improvement = 2.2 ± 0.2 dB SNR), compared with five CI users (excluding CI2 and CI4) who showed an SRTn improvement of 4.5 dB SNR (±3.1) over the 5 days of training (p < 0.01).

TABLE 2: Individual data for the CI and NH who trained 5 and 10 days

The two groups differed in Trail Making Test part A scores (t [11] = −3.0, p < 0.01), with the CI users responding more slowly (M = 23.4 ± 4.3 s) than the NH group (M = 17.0 ± 2.8 s), but not on the backward digit span. No significant correlations were found within the CI group, except for a medium, non-significant correlation (r [5] = 0.6, p = 0.2) between HAB word recognition in quiet and training-induced gains on the 5th day of CI training.

10-Day Training for CI Only

The mean SRTn (±SE) for the CI users who continued to train for an additional 5 days, for a total of 10 days, are shown in Figure 2, together with the linear regression across the 10 days of training (60 measurements), suggesting that measurement number can explain 84% of the variance in SRTn performance. A two-way repeated measures ANOVA was conducted to assess the effects of Day (1–10) and Measurement (1–6) for these CI users. The results revealed significant effects of Day (F [9, 54] = 8.2, p < 0.01, η2 = 0.6) and Measurement (F [5,30] = 6.2, p < 0.05, η2 = 0.5) with no significant interactions. The results also revealed linear effects of Day (p = 0.005) and linear and quadratic effects of Measurement (p = 0.048 and p < 0.0001, respectively), suggesting significant learning within each day and across the 10 days of training.

A separate Pearson correlation was conducted for each of the CI participants (Table 2). Participants CI2 and CI4 (the "non-learners" in the first 5 days of training) demonstrated significant learning over 10 days of training (p < 0.01). They improved by 1.9 dB SNR, compared with an improvement of 5.0 dB SNR shown by the other CI users. Overall, the average SRTn improvement of the CI group after 10 days of training was 4.1 dB SNR (±3.5). A trend for a linear association was found between the first two SRTn on the first day of training and HAB monosyllabic word recognition in quiet (r [5] = −0.8, p = 0.06).

Generalization of Training and Long-Term Retention

No improvement in the untrained HAB words in noise was found following training with the Matrix sentences-in-noise test (p > 0.05). Specifically, mean pre-training performance was 11.4% (±10.7) for words and 33.1% (±7.8) for phonemes, compared with post-training performance of 7.4% (±9.5) and 29.0% (±7.2), respectively. This suggests that no transfer of learning (generalization) occurred from the trained speech stimuli to the untrained material, despite the significant improvement shown for the trained stimuli.

Retention of the training-induced gains was assessed in six of the CI participants 3 months post-training (Fig. 2). t tests revealed that the average SRTn of the last two measurements of the retention day was significantly higher (worse) than that of the last two measurements of training Days 9 and 10 (t [5] = −2.1, p = 0.02 and t [5] = −2.6, p = 0.014, respectively). Comparison of the retention results with the measurements on Days 1–7 revealed that up to and including Day 5, retention performance tended to be better, with significance levels (p) in the range of 0.07 to 0.09.

DISCUSSION

The present study demonstrated for the first time the effect of training speech perception in noise in CI users with prelingual hearing loss, compared with NH listeners trained with the same speech materials and training protocol. Specifically, our data support the following findings: 1) CI users showed SRTn that were 9 to 10 dB higher than those of the NH following a single session of training; 2) both groups showed an average improvement in SRTn of 0.5 dB SNR across the first four measurements in one training session; 3) after 5 days of training, five of seven CI users improved their SRTn by approximately twice as much as the NH; 4) after 5 more days of training, the two CI "non-learners" showed improvement, and overall the CI group improved by 4.1 dB SNR; 5) for the CI users, word recognition in quiet predicted SRTn on the first day of training and training-induced gains on the 5th day of training; and 6) no generalization of learning to word recognition in noise was demonstrated, and improvement was partially preserved at 3 months post-training.

One-Day Session

Our finding that CI users show SRTn that are 9 to 10 dB higher than NH when tested in a single session is within the 10 to 25 dB CI disadvantage reported by others (2,3). That the CI users in the present study did not demonstrate a greater disadvantage may be related to the fact that the speech material used required less cognitive effort and less prior linguistic knowledge than more complex speech material in noise. It is also possible that, because our CI users were prelingually deafened and had no previous cortical acoustic-linguistic representations, they adapted better to the impoverished speech signal (29), leading to better performance on the Matrix test. Noteworthy is the large inter-subject variability in SRTn scores among the CI users (17.7 dB SNR) compared with the NH (3.8 dB SNR). Possible factors that may explain the variability in the performance of the CI users include the amount of functional residual hearing before and after receiving the CI device, age at implantation relative to functional hearing, and device-related input signal processing. Regarding the latter factor, studies have shown that a narrow input dynamic range (IDR), such as in the Cochlear ESPrit 3G (Cochlear Corporation, Lane Cove, NSW, Australia) processor, results in poor performance when input levels exceed 65 dB SPL because of the compression applied at the upper end of the IDR. This may have reduced any positive SNR that existed in the test condition (30). In contrast, sound processors with a wider IDR, such as those from Advanced Bionics, MED-EL, and Cochlear (Freedom and Nucleus 5, 6), partially preserve the positive SNR. Similarly, differences in microphone technology may have influenced the ability of the device to preserve the SNR at its input (30). Although these device-related factors can have a significant impact on the performance of the CI user in noise, thus contributing to the between-subject variability at initial testing (see Table 1), they should not be a confounding factor in the present study, because each CI user used the same program throughout training and testing. Moreover, the noise was held constant at 65 dB SPL during all training and testing sessions.

To overcome task learning, both groups were initially exposed to 40 sentences before training, as recommended in studies using the Matrix test (28,31). During single-session training, both groups showed an improvement of 0.5 dB SNR across the first four measurements, similar to the improvement reported by Hey et al. (31) for CI users (0.3–0.5 dB SNR) but less than that reported for normal-hearing listeners in other languages (28). The NH were a homogeneous group of young adults, and it is possible that they went through the fast learning phase during pre-training (i.e., in the first two lists before testing). In contrast, the CI users did not show improvement across the four measurements, which may be related to the large heterogeneity of this group, which may have differentially influenced the listeners' ability to learn the task.

Five-Day Training (NH versus CI Users)

A major outcome of the present study is that after 5 days of training, both the NH and the CI groups showed significant improvement across the days and within each day of training. The NH showed an average improvement in SRTn of 2.2 dB SNR across the 5-day training. Although the present study was not designed with a control group, we compared our data with unpublished data from 12 young adults with normal hearing whose SRTn were measured four times in each of three sessions, 5 weeks apart, with no training between sessions. The performance on the first day of this untrained group was similar to that of the NH adults in the present study (−8.1 and −8.0 dB SNR, respectively). The change in performance between the two testing sessions of the untrained group (5 weeks apart) was 0.3 dB SNR, considerably less than the improvement of 2.2 dB SNR over the 5 days of training demonstrated in the present study.

All NH participants and five of the seven CI users showed learning over the 5 days of training. When comparing the improvements between the two groups (taking into account only those who showed learning), the CI users improved their SRTn by 4.5 dB SNR and the NH by 2.2 dB SNR. This is in keeping with previous findings showing the greatest gains for those who started out poorest (31). The improvements demonstrated in the present study are considerably larger than the improvement of 1.6 dB defined by Sweetow and Sabes (32) as clinically significant.

10-Day Training (CI Users Only)

Training the CI users for 5 more days improved their SRTn by an additional 1 to 2 dB SNR. In addition, the two CI participants who initially showed no significant learning demonstrated learning with continued training, suggesting that CI users may differ in their time course of learning and that this should be considered in clinical applications of auditory training. Overall, the CI group improved by an average of 4.1 dB SNR. Thus, the CI group succeeded in cutting in half the initial disadvantage they showed compared with the NH on the first day of testing. To our knowledge, this is the first report of such significant improvements in CI users following training.

Although learning was evident across the days and within each day, the average first measurement of each training day was significantly higher (worse) than the average last measurement of the previous training day (M = 0.1 ± 1.5 and M = −0.7 ± 1.6, respectively; t [8] = 5.1, p = 0.0009). These findings suggest that the CI users required re-learning of the task, which was then followed by further improvement in thresholds. Future studies aimed at assessing learning of speech perception in noise should examine the characteristics of the course of learning following training. Such information will have both theoretical and clinical implications (33).

Predicting Factors

The single predictive factor of the SRTn for the CI users was word recognition score in quiet. Word recognition was associated with performance on the first day of training and with improvement in SRTn on the fifth day of training. Not surprisingly, a basic requirement for listening in noise is access to speech sounds and the ability to process them correctly. In normal-hearing listeners, access to speech sounds is not an issue, and therefore much of the variance explaining the differences between listeners in complex situations is attributed to top-down processing related to cognitive abilities and linguistic knowledge. Our finding that the NH adults showed better visual attention and better working memory than the CI users supports the notion that poor cognitive abilities are associated with poor speech perception in noise (e.g., (13,34,35)).

Generalization and Retention

There are several possible explanations for our finding of no generalization of learning to unfamiliar words in noise following training. One possibility may be related to the difficulty of the listening condition for the untrained speech stimuli, as evidenced by the poor performance at baseline (mean word recognition in noise of 11.4%). Support for this explanation can be found in several studies with postlingually deafened adults (e.g., (10,12,17)). Schumann et al. (17), for example, reported a significant improvement of 10 percentage points on untrained sentences in noise following training, but only when the SNR was +5 dB (mean pre-training sentence score of approximately 60%). No improvement was observed for sentences in noise at 0 dB SNR (mean pre-training score of 20%). Similarly, Ingvalson et al. (10) reported improvement in key word recognition scores on untrained HINT sentences following training in postlingually deafened adults, but only in favorable listening conditions. Thus, the extent of improvement on untrained stimuli following training may depend on baseline performance. It is also possible that the training in the present study, which targeted recognition at the 50% point on the psychometric function, was not sufficient for learning the skills necessary for listening in noise. Training at higher points on the psychometric function, implying that more words in noise are heard, may result in better generalization of learning following training. A third possibility may be related to differences in the training material, which consisted of sentences with very familiar words, compared with the untrained speech material of monosyllabic words, some of which were unfamiliar to the participants. Thus, our data continue to support the notion that the conditions under which perceptual learning generalizes are not well understood, that there is no simple rule that can be used to predict the pattern of generalization on a given task (33), and that further investigation is required. Noteworthy is the fact that at the end of the training protocol, our CI users filled out an informal questionnaire regarding the training process and the benefit they thought they had gained from it. All but one CI user reported better ability and motivation to attend to the words in background noise and thought that training in noise helped them listen in difficult situations in real life. Other researchers have reported similar impressions (e.g., (13,32)). In future studies, it would be of interest to relate the ability to listen in noise and the benefit from training to quality-of-life measures such as the one based on the World Health Organization International Classification of Functioning (WHO-ICF) (36).

Our finding of partial retention of the improvement following training may be evidence of the consolidation process that transfers the new skill to long-term memory (e.g., (37,38)). The lack of full retention may be related to the amount of training, which may not have been sufficient, or to the training protocol. Training that assesses identification at 70% words correct may have resulted in a different course of learning (31) and possibly better retention of training-induced gains.

Limitations of the Study

One limitation of the present study is that training was conducted with little variability in sentence structure, vocabulary, and talker. Training may therefore be very specific to the task. Another limitation is the small and heterogeneous cohort of CI users who received multi-day training. Their different background variables may have interacted with the benefit shown from training.

Summary

The present study is the first to demonstrate the course of learning following training of speech perception in noise in CI users with prelingual hearing loss. By training with the Matrix test, CI users were able to reduce by 50% their disadvantage in listening in noise compared with the NH. Notwithstanding the importance of the training, the outcomes of the present study also emphasize the importance of the CI users' access to speech sounds for successful listening in noise. Future research may consider combining training on sentences in noise with increasing cognitive demand (e.g., from recognition to comprehension: "top-down" processing) with CI devices that incorporate speech enhancement (or noise reduction) algorithms for improving "bottom-up" processing.

Acknowledgments

The authors wish to express their deep appreciation to Shiran Koiffman, Dr. Melanie Zokol, and Prof. Birger Kollmeier for assisting in the recording of the test stimuli, running the optimization procedure, providing the equipment and supporting them in many other ways. They are grateful to Haya Grinvald for the statistical analysis, and they would like to acknowledge the following undergraduate students at the Communication Disorders department for assisting with the data collection: Aliza Sarah-Levi, Raya Elizur, Yael Diriham, Reut Yoskovich, Natali Horshidi, and Liron Levi.

Appendix A: Hebrew Matrix Test

The Matrix test has been developed in over 15 languages, with very similar discrimination functions and comparable results for normal-hearing listeners across languages (28). Importantly, the test was found to be well suited for the assessment of speech recognition in noise by CI users (31). The Matrix test consists of sets of sentences that are syntactically identical but semantically unpredictable. All sentences have the same grammatical structure (in Hebrew: name-verb-number-noun-adjective) and are drawn from a base list of 50 words (appropriate for 5-year-olds), 10 words in each grammatical category. Theoretically, the 50 words can make up 100,000 sentences, making it unlikely that the exact same sentence is repeated.

Recordings of a native Hebrew-speaking female talker were made in the laboratories of Oldenburg University. To equate speech intelligibility across the individual words of all sentences, optimization and evaluation measurements were conducted according to the guidelines developed and described by Kollmeier et al. (28), which were used for the development of the Matrix test in the other languages. A detailed description of the preparation of the speech stimuli of the Hebrew Matrix test is given elsewhere (Kishon-Rabin et al., in preparation).

The noise was a steady-state speech-shaped noise generated by superimposing all synthesized sentences (28). The noise was presented at a fixed level of 65 dB SPL. The first sentence was presented at an SNR of 0 dB. Based on the responses of the listener, the level of the sentences was then varied adaptively (22). Specifically, correct recognition of 1, 2, 3, 4, or 5 words resulted in the next sentence being presented with the SNR adjusted by +4.5, +1.5, −1.5, −4.5, or −7.5 dB, respectively. The step size decreased exponentially after each reversal of the presentation level. At the end of the track, the SRTn was estimated using a maximum likelihood procedure (22). Each SRTn was based on 20 different sentences.
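
The sketch below is not the published adaptive procedure of Brand and Kollmeier (22); it only illustrates one plausible reading of the rule described above, with the listed dB values treated as per-sentence SNR adjustments, an assumed shrink factor of 0.7 applied to the step after each reversal, and the SRTn approximated by the mean SNR of the last sentences rather than by the maximum likelihood estimate used in the actual test. All names and numeric choices beyond those in the text are assumptions.

import random

# dB adjustment applied to the next sentence, by number of words repeated correctly.
# Values for 1-5 correct come from the text; 0 correct (+7.5) is an assumed extension.
STEP_BY_WORDS_CORRECT = {0: +7.5, 1: +4.5, 2: +1.5, 3: -1.5, 4: -4.5, 5: -7.5}

def run_track(score_sentence, n_sentences=20, start_snr=0.0):
    # score_sentence(snr) -> number of words (0-5) the listener repeats correctly.
    # Returns the SNRs (dB) at which the 20 sentences of one track were presented.
    snr, scale, last_direction = start_snr, 1.0, 0
    snrs = []
    for _ in range(n_sentences):
        snrs.append(snr)
        step = scale * STEP_BY_WORDS_CORRECT[score_sentence(snr)]
        direction = 1 if step > 0 else -1
        if last_direction and direction != last_direction:
            scale *= 0.7            # assumed shrinkage of the step at each reversal
        last_direction = direction
        snr += step
    return snrs

def simulated_listener(snr, true_srtn=-2.0, slope=0.15):
    # Logistic psychometric function: 50% of words correct when snr == true_srtn.
    p = 1.0 / (1.0 + 10 ** (-slope * (snr - true_srtn)))
    return sum(random.random() < p for _ in range(5))

track = run_track(simulated_listener)
print(f"approximate SRTn = {sum(track[-10:]) / 10:.1f} dB SNR")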

Appendix B

t tests for independent means comparing the early implanted and late implanted CI users revealed no differences (p > 0.05) on the postimplant HAB word recognition test in quiet (M = 64.2 ± 21.8 and M = 54.4 ± 22.1, respectively), the Trail Making Test part A (M = 19.7 ± 3.8 s and M = 21.6 ± 7.5 s, respectively), or the backward digit span (M = 4.6 ± 1.58 and M = 4.3 ± 1.06, respectively). The early implanted group showed slightly lower (worse) non-verbal intelligence scores than the late implanted group (M = 80.7 ± 11.3 and M = 89.0 ± 3.96, respectively; t [19] = −1.8, p = 0.045). Also, a two-way repeated measures ANOVA conducted with Measurement (1–4) as the within-subject variable and Subgroup (CI early implanted versus CI late implanted) as the between-subject variable revealed no significant main effect of Subgroup and no Subgroup × Measurement interaction (p > 0.05).

REFERENCES

1. Litovsky RY, Gordon K. Bilateral cochlear implants in children: effects of auditory experience and deprivation on auditory perception. Hear Res 2016; 338:76–87.
2. Spriet A, Van Deun L, Eftaxiadis K, et al. Speech understanding in background noise with the two-microphone adaptive beamformer BEAM™ in the Nucleus Freedom™ cochlear implant system. Ear Hear 2007; 28:62–72.
3. Wouters J, Berghe JV. Speech recognition in noise for cochlear implantees with a two-microphone monaural adaptive noise reduction system. Ear Hear 2001; 22:420–430.
4. Fu QJ, Galvin JJ. Maximizing cochlear implant patients’ performance with advanced speech training procedures. Hear Res 2008; 242:198–208.
5. Goehring T, Bolner F, Monaghan JJ, et al. Speech enhancement based on neural networks improves speech intelligibility in noise for cochlear implant users. Hear Res 2017; 344:183–194.
6. Schafer EC, Thibodeau LM. Speech recognition abilities of adults using cochlear implants with FM systems. J Am Acad Audiol 2004; 15:678–691.
7. Anderson ES, Nelson DA, Kreft H, et al. Comparing spatial tuning curves, spectral ripple resolution, and speech perception in cochlear implant users. J Acoust Soc Am 2011; 130:364–375.
8. Ye H, Deng G, Mauger SJ, et al. A wavelet-based noise reduction algorithm and its clinical evaluation in cochlear implants. PLoS One 2013; 8: e75662.
9. Gantz BJ, Turner C, Gfeller KE. Acoustic plus electric speech processing: preliminary results of a multicenter clinical trial of the Iowa/Nucleus Hybrid implant. Audiol Neurotol 2006; 11:63–68.
10. Ingvalson EM, Lee B, Fiebig P, Wong PC. The effects of short-term computerized speech-in-noise training on postlingually deafened adult cochlear implant recipients. J Speech Lang Hear Res 2013; 56:81–88.
11. Steinmetzger K, Rosen S. The role of periodicity in perceiving speech in quiet and in background noise. J Acoust Soc Am 2015; 138:3586–3599.
12. Henshaw H, Ferguson MA. Efficacy of individual computer-based auditory training for people with hearing loss: a systematic review of the evidence. PLoS One 2013; 8:e62836.
13. Oba SI, Fu QJ, Galvin JJ. Digit training in noise can improve cochlear implant users’ speech understanding in noise. Ear Hear 2011; 32:573–581.
14. Song JH, Skoe E, Banai K, Kraus N. Training to improve hearing speech in noise: biological mechanisms. Cerebral Cortex 2011; 22:1180–1190.
15. Parbery-Clark A, Marmel F, Bair J, Kraus N. What subcortical–cortical relationships tell us about processing speech in noise. Eur J Neurosci 2011; 33:549–557.
16. Wong PC, Ettlinger M, Sheppard JP, et al. Neuroanatomical characteristics and speech perception in noise in older adults. Ear Hear 2010; 31:471.
17. Schumann A, Serman M, Gefeller O, Hoppe U. Computer-based auditory phoneme discrimination training improves speech recognition in noise in experienced adult cochlear implant listeners. Int J Audiol 2015; 54:190–198.
18. Miller JD, Watson CS, Kistler DJ, Wightman FL, Preminger JE. Preliminary evaluation of the speech perception assessment and training system (SPATS) with hearing-aid and cochlear-implant users. Proc Meet Acoust 2008; 2:1–9.
19. Tyler RS, Witt SA, Dunn CC, Wang W. Initial development of a spatially separated speech-in-noise and localization training program. J Am Acad Audiol 2010; 21:390–403.
20. Zhang T, Dorman MF, Fu QJ, Spahr AJ. Auditory training in patients with unilateral cochlear implant and contralateral acoustic stimulation. Ear Hear 2012; 33:e70–e79.
21. Fu QJ, Chinchilla S, Galvin JJ. The role of spectral and temporal cues in voice gender discrimination by normal-hearing listeners and cochlear implant users. J Assoc Res Otolaryngol 2004; 5:253–260.
22. Brand T, Kollmeier B. Efficient adaptive procedures for threshold and concurrent slope estimates for psychophysics and speech intelligibility tests. J Acoust Soc Am 2002; 111:2801–2810.
23. Wechsler D. Wechsler Intelligence Scale for Children-III. San Antonio: The Psychological Corporation; 1991.
24. Tombaugh TN. Trail Making Test A and B: normative data stratified by age and education. Arch Clin Neuropsychol 2004; 19:203–214.
25. Raven Manual, Section 1: Standard Progressive Matrices. Oxford: Oxford Psychologist Press Ltd; 1998.
26. Kishon-Rabin L, Patael S, Menahemi M, Amir N. Are the perceptual effects of spectral smearing influenced by speaker gender? J Basic Clin Physiol Pharmacol 2004; 15:41–55.
27. Boothroyd A. Statistical theory of the speech discrimination score. J Acoust Soc Am 1968; 43:362–367.
28. Kollmeier B, Warzybok A, Hochmuth S, et al. The multilingual matrix test: principles, applications, and comparison across languages: a review. Int J Audiol 2015; 54:3–16.
29. Kishon-Rabin L, Taitelbaum R, Muchnik C, et al. Development of speech perception and production in children with cochlear implants. Ann Otol Rhinol Laryngol 2002; 111:85–90.
30. Wolfe J, Schafer EC, John A, Hudson M. The effect of front-end processing on cochlear implant performance of children. Otol Neurotol 2011; 32:533–538.
31. Hey M, Hocke T, Hedderich J, Müller-Deile J. Investigation of a matrix sentence test in noise: reproducibility and discrimination function in cochlear implant patients. Int J Audiol 2014; 53:895–902.
32. Sweetow RW, Sabes JH. The need for and development of an adaptive listening and communication enhancement (LACE™) program. J Am Acad Audiol 2006; 17:538–558.
33. Irvine DRF. Auditory perceptual learning and changes in the conceptualization of auditory cortex. Hear Res 2018; 366:3–16.
34. Lunner T, Sundewall-Thorén E. Interactions between cognition, compression, and listening conditions: effects on speech-in-noise performance in a two-channel hearing aid. J Am Acad Audiol 2007; 18:604–617.
35. Rudner M, Rönnberg J, Lunner T. Working memory supports listening in noise for persons with hearing impairment. J Am Acad Audiol 2011; 22:156–167.
36. Zhang M, Malysa C, Huettmeyer F, Piplica D, Schmidt B. Using the international classification of functioning model to gain new insight into the impact of cochlear implants on prelingually deafened recipients. J Speech Pathol Ther 2016; 1:1–6.
37. Karni A, Bertini G. Learning perceptual skills: behavioral probes into adult cortical plasticity. Curr Opin Neurobiol 1997; 7:530–535.
38. Karni A, Meyer G, Rey-Hipolito C, et al. The acquisition of skilled motor performance: fast and slow experience-driven changes in primary motor cortex. Proc Natl Acad Sci USA 1998; 95:861–868.
Keywords:

Auditory perceptual learning; Auditory training; Cochlear implants; Prelingual hearing loss; Speech perception in noise


Copyright © 2019 by Otology & Neurotology, Inc.