Nonauditory Functions in Low-performing Adult Cochlear Implant Users

Völter, Christiane; Oberländer, Kirsten; Carroll, Rebecca; Dazert, Stefan; Lentz, Benjamin; Martin, Rainer; Thomas, Jan Peter

doi: 10.1097/MAO.0000000000003033


Although most cochlear implant (CI) recipients benefit from cochlear implantation with regard to speech recognition, great variability in individual speech outcome has been reported (1–3). While some CI users reach a high level of speech recognition after a few months, others continue to struggle. Even though this problem was already recognized more than 20 years ago (4), the “enigma” of poor performers still persists (5,6). Depending on the criteria used, 10 to 50% of adult CI users fall into the category of so-called “poor performers” (7,8). In the past, studies on CI users have mainly dealt either with “star performers” in speech recognition or with a more representative group of CI recipients with varying degrees of speech recognition performance (9,10). Low performers have rarely been the focus of attention (1,5). Considering the costs related to implantation and the high expectations of the CI candidates regarding improvement in hearing, speech perception, and quality of life, there is an obvious need to understand the cause of low performance and to find a solution to overcome it. Our aim was to find out whether deficits in top-down mechanisms such as neurocognitive and/or linguistic skills might explain the differences in speech recognition performance between high- and low-performing CI users.

Previous research in large multicenter studies including more than 2000 CI recipients has primarily focused on hearing- and device-related factors such as the number of active electrodes, the brand of device, the duration of hearing loss before implantation, the user's age, the etiology of hearing loss, the amount of residual hearing, or the use of hearing aids (6,10–14). However, no significant difference in speech perception has been found between CI recipients with a thin straight electrode array and those with a perimodiolar array (15). These pre-, peri-, and postoperative factors might account for 10 to 20% of the variability in speech outcome in these patients (6,14).

In addition to “bottom-up” processes that are based on the sensory input, cognitive functioning and language knowledge allow for “top-down” mechanisms to compensate for unreliable bottom-up processing (1,16). According to one popular model pertaining to adults with hearing loss, the Ease of Language Understanding model (ELU) (17), there are two speech processing channels: the rapid-automatic effortless channel (implicit) which is predominantly used by normal hearing subjects in easy listening situations and the compensatory slow and effortful loop-back channel (explicit) for perceptually challenging situations. Following the implicit channel, the incoming words are encoded according to their phonological structure, bound in the episodic buffer, and compared to the phonological representation of the semantic long-term memory (16). Phonological sensitivity is a key element of implicit speech understanding. In contrast, the explicit channel uses semantic and general information to overcome the mismatch between the incoming signal and the encoded signal and largely depends on working memory capacity (18).

Due to the persistent mismatch between the incoming signal and the stored phono-lexical representation, episodic long-term memory is severely affected in people with long-term hearing loss (19,20). Even hearing rehabilitation does not improve performance in memory tasks as stated by Rönnberg et al. (20). While executive functions such as attention and working memory significantly improved after hearing aid use, long-term memory remained largely unchanged (16). A similar observation was made by Zhan et al. (21), who studied neurocognitive functions in 19 adult CI users 6 months after CI provision. Furthermore, auditory deprivation may lead to a degradation of phonological representations in the long-term memory and thereby reduce phonological sensitivity (22).

When listening to speech, words are primarily captured by their phonological structure. In the competition between phonologically related words, candidates are selected according to the statistical probability with which they are stored in the auditory input lexicon, which largely depends on word frequency and neighborhood density. However, in challenging listening situations, listeners use semantic context to access the meaning of a word without full auditory analysis (23). In people with normal hearing (NH) and ideal listening situations, both routes of information, the implicit direct route and the explicit feedback loop, are used to process and understand as much information as possible according to Rönnberg's model (24).

In addition to phonological and language skills, cognitive functions play a crucial role in CI users' speech perception. Correlations between speech perception and neurocognitive functions such as inhibition-concentration, working memory, and information-processing speed have been described (21,25–27). CI users, who receive degraded signals with limited spectral resolution and temporal fine-structure information, might rely even more on cognitive and linguistic functions than NH listeners (26). Differences in speech outcome may partly be due to a suboptimal interaction of neurocognitive and linguistic functions in CI users who do not sufficiently benefit from implant use.

The aim of the present study was to focus on the group of low-performing CI users and to analyze the relationship between cognitive and linguistic skills in these subjects in comparison to high-performing users by applying visually presented linguistic and neurocognitive assessments. We asked whether differences in neurocognitive and linguistic skills may explain differences in speech recognition performance between these two groups. Instead of applying simple correlational analyses across a representative group of users, in which LP users would be only a minority, we contrasted two extreme groups of CI users. This allowed us to test equally sized groups, thus focusing more closely on the cognitive and linguistic characteristics of the LP group. A better understanding of the difficulties that CI users with a poorer speech outcome face might help to make more precise predictions of speech perception in CI implantees and to develop focused rehabilitation strategies for these patients.


Study Samples

Records from patients aged 18 years or older who were implanted at the Cochlear Implant Center between 2010 and 2017 were studied retrospectively and searched for CI patients with a maximum postoperative speech perception on the implanted side of 30%, as assessed by the Freiburger monosyllabic speech test at 65 dB (28). These CI patients were defined as low-performing CI users (LP: low performer). Furthermore, cochlear implant users with a speech perception of 70% or more in the Freiburger monosyllabic test at 65 dB (HP: high performer) were selected as a control group. Sentence recognition was assessed by the Hochmair–Schulz–Moser (HSM) sentence test (29) in quiet at 65 dB in free field in a soundproof booth (DIN EN ISO 8253). Inclusion criteria and subjects' profiles are listed in Tables 1 and 2. There were no significant differences between the two groups concerning age, duration of CI use, and hearing loss.

TABLE 1 - Inclusion criteria
Inclusion Criteria
 Postlingual onset of hearing loss and CI experience of ≥1 year
 Native or excellent German speaker
 No severe cognitive impairment, visual impairment or central nervous system disease
 Complete insertion of the electrode array

TABLE 2 - Profile of the subjects
Low Performer High Performer p
N 15 19
Female 10 12
Male 5 7 0.85
Age (yrs)
 Mean (SD) 71.6 (8.10) 66.95 (10.96)
 Median 72.0 70
 Range 56–84 46–85 0.27
CI experience (yrs)
 Mean (SD) 4.87 (3.14) 6.0 (4.84)
 Median 5.00 4.00
 Range 1–11 1–20 0.68
Freiburger monosyllabic test
 N 15 19
 Mean (SD) 15% (11.80) 80% (4.85)
 Median 15% 80% 0.00001∗∗∗
HSM in quiet
 N 14 15
 Mean (SD) 21% (27.44) 92% (6.56)
 Median 10% 94% 0.00001∗∗∗
∗∗∗Indicates p < 0.001.

Non-Auditory Test Battery

Different neurocognitive and linguistic assessments were applied after cochlear implantation: 1) the computer-based neurocognitive ALAcog test battery, 2) the Text Reception Threshold Test, 3) the Lexical Decision Test, 4) the LEMO 2.0 subtest V9, and 5) the RAN (Rapid Automatized Naming Test). Some patients did not perform all subtests because they could not attend the second appointment due to illness or unwillingness, so the numbers of LP and HP differ across tests.

The ALAcog, divided into 10 subtests, covered various neurocognitive domains, as previously described by Völter et al. (30,31):

  1) In the attentional task M3, a target letter and some distractors were presented to the subject, who had to click on the target letter as fast as possible.
  2) For the recall and delayed recall tasks, 10 words shown on screen needed to be recalled immediately and again after 30 minutes.
  3) To assess working memory, the 2-back test, which required a response when a letter shown was identical to the second-to-last one, and a dual task, in which letters had to be memorized while equations had to be solved (Operation Span Task, OSPAN), were implemented.
  4) To assess the ability to inhibit responses, a flanker task was included: subjects responded to a target flanked by arrows above and underneath, pointing either in the same direction (compatible flanker: cFlanker) or in different directions (incompatible flanker: iFlanker).
  5) In TMT A, which assesses processing speed, and TMT B, which measures executive functions, subjects were asked to connect randomly arranged items in ascending order (in TMT A numbers from 1 to 26, in TMT B numbers from 1 to 13 and letters from A to M) as quickly as possible.
  6) In the verbal fluency subtest, the subject had to name as many animals as possible as well as words starting with the letter “S” and with “B.”

For each test, total performance (inverse efficiency, IE) was calculated based on the time needed and the number of correct answers given. A low IE indicated good performance.
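The paper does not spell out the exact IE formula. A common convention, assumed here for illustration (the function name, millisecond unit, and example values are ours, and the ALAcog's internal computation may differ), divides reaction time by the proportion of correct answers:

```python
def inverse_efficiency(mean_rt_ms, n_correct, n_total):
    """Inverse efficiency (IE): mean reaction time divided by the
    proportion of correct answers. Lower IE = better performance.
    This is the conventional IE formula, assumed here; the exact
    computation used by the ALAcog battery may differ."""
    if n_correct == 0:
        raise ValueError("IE is undefined when no answers are correct")
    return mean_rt_ms / (n_correct / n_total)

# A fast but error-prone subject and a slow but accurate one
# can end up with roughly the same IE (both about 1000 here):
print(inverse_efficiency(800, 80, 100))
print(inverse_efficiency(1000, 100, 100))
```

The combined score penalizes both slowness and errors, which is why a single number per subtest can be compared across groups.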

In the Text Reception Threshold Test (32), subjects read three sets of 20 visually presented sentences of the OLSA (Oldenburger Sentence Test) that were partially covered by a masking pattern. The score indicated the percentage of masking, by periodic bars, floating bars, or random dots, at which 50% of the sentences could still be recognized.

The Lexical Decision Test (33) measured lexical access times. Subjects had to decide as quickly as possible whether a presented letter combination constituted a real word or not. Half of the letter combinations were existing words (either high- or low-frequency); the other half were non-words. The reaction time to existing words indicated lexical access time. The reliability of the subject's answers was quantified by the sensitivity index d′.
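The sensitivity index d′ comes from signal detection theory. A minimal sketch of its computation follows; the half-trial clipping of extreme rates is one common convention and is our assumption, not necessarily the correction used in the study:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """d' = z(hit rate) - z(false-alarm rate), where hits are 'word'
    responses to real words and false alarms are 'word' responses to
    non-words. Rates of exactly 0 or 1 are clipped by half a trial
    so the inverse-normal transform stays finite."""
    z = NormalDist().inv_cdf
    n_words = hits + misses
    n_nonwords = false_alarms + correct_rejections
    hit_rate = min(max(hits / n_words, 0.5 / n_words), 1 - 0.5 / n_words)
    fa_rate = min(max(false_alarms / n_nonwords, 0.5 / n_nonwords),
                  1 - 0.5 / n_nonwords)
    return z(hit_rate) - z(fa_rate)

# 90% hits and 10% false alarms over 50 trials each gives d' of about
# 2.56, close to the LP group mean of 2.53 reported in Table 5.
print(round(d_prime(45, 5, 5, 45), 2))
```

Higher d′ means the subject separated words from non-words more reliably, independently of any overall bias toward answering "word."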

The subtest V9 (“reading internally: phonological word or neologism”) from LEMO 2.0 (34) assessed the phonological input lexicon. The subject had to decide whether a graphemically presented neologism was homophonic (does not exist in written form but sounds like an existing word) or phonologically neologistic (does not sound like a real word).

The rapid automatized naming (RAN) test, a subtest of the TEPHOBE (35,36), measured the automatic process of naming familiar items as quickly as possible. Ten lines of five items each were presented for objects, colors, letters, and numbers, and the number of items named per second was recorded.

Statistical Analyses

In a first step, descriptive statistics including mean, median, and standard deviation (SD) were used to analyze data from medical records as well as audiological, cognitive, and linguistic data in LP and HP. Inferential pairwise comparisons of neurocognitive and linguistic performance were then calculated as follows: for continuous parameters, we used the rank ANOVA according to Kruskal and Wallis (37), because the data were not always normally distributed, and for discrete parameters we applied the χ2 test. Effect size was calculated with Cohen's d. In a second step, a discriminant function analysis between LP and HP was carried out for the neurocognitive and linguistic variables. First, a factor analysis was applied to find the underlying factors, from which six components were extracted: 1) attentional processes, 2) automatization processes, 3) memory, 4) inhibitory abilities, 5) linguistic capacity, and 6) working memory. The variable with the highest loading on each component was included in the discriminant function analysis. Wilks' lambda indicates how much the variables contribute to separating the two performance groups: the closer Wilks' lambda is to 0, the more the variables contribute to the discriminant function. Furthermore, the discriminant value of the variable with the strongest discriminatory power was calculated (38). The significance levels were set as follows: ∗p < 0.05, ∗∗p < 0.01, ∗∗∗p < 0.001. The statistical program used was Medas (C. Grund, Margetshöchheim, Germany).
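As a concrete illustration of the effect-size computation named above, Cohen's d with a pooled standard deviation can be sketched in a few lines. The formula is the standard one; the sample IE values below are invented for illustration and are not the study's data:

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference of group means divided by the pooled
    (sample-size weighted) standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2
                  + (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / pooled_var ** 0.5

# Hypothetical IE scores: the group with higher (worse) IE values
# yields a large positive d, as in Table 3 (e.g., d = 1.12 for M3).
lp_scores = [1300, 1100, 1450, 1200]
hp_scores = [800, 760, 820, 740]
print(round(cohens_d(lp_scores, hp_scores), 2))
```

In practice, the rank ANOVA itself is available as `scipy.stats.kruskal`, so the pipeline above needs only the effect-size helper on top of standard library tools.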

Results

Significant differences between LP and HP were detected in various cognitive subdomains of the neurocognitive ALAcog test battery (see Fig. 1A, Table 3): the most prominent difference was observed in the M3. HP performed better than LP in reaction time (HP = 747.42, LP = 1111.79, p = 0.013) as well as in the number of correctly identified items (HP = 90.61, LP = 64.4, p = 0.002∗∗). Total IE (HP = 780.5, LP = 1282.67, p = 0.003∗∗) also differed significantly.

FIG. 1:
A, Subtests of the ALAcog test battery. Lower scores indicate better performance. ∗ indicates p < 0.05. ∗∗ indicates p < 0.01. B, Lexical Decision Test, sensitivity index d′. The higher the score, the better the sensitivity. ∗∗ indicates p < 0.01. C, LEMO 2.0 subtest V9 (phonological word or neologism). Higher scores indicate better performance. ∗ indicates p < 0.05. ∗∗ indicates p < 0.01. D, Rapid automatized naming. Higher scores indicate better performance. ∗ indicates p < 0.05. ∗∗ indicates p < 0.01. E, Discriminant function analysis of the rapid automatized naming subtest “numbers” for low performers (LP) and high performers (HP). Higher scores indicate better performance. The discriminant level was set at 2.22 (grey bar); 92% of the LP scored below and 85% of the HP above this level.
TABLE 3 - Inverse efficiency (IE) in the ALAcog subtests of low performers (LP) and high performers (HP)
Neurocognitive Subtests N Mean SD p Cohen's d
M3
 LP 15 1282.67 618.60
 HP 18 780.50 221.80 0.003∗∗ 1.12
Recall
 LP 15 538.00 182.76
 HP 18 416.11 219.66 0.12 0.6
Delayed recall
 LP 15 692.67 162.46
 HP 18 511.67 234.38 0.04∗ 0.88
2-back
 LP 14 762.93 538.10
 HP 18 573.89 185.60 0.22 0.50
OSPAN
 LP 15 789.53 386.60
 HP 18 468.11 248.60 0.0068∗∗ 1.01
cFlanker
 LP 15 412.73 103.62
 HP 18 374.33 82.41 0.33 0.41
iFlanker
 LP 15 552.27 148.91
 HP 18 477.22 111.67 0.037∗ 0.58
TMT A
 LP 15 898.60 448.10
 HP 18 626.56 216.80 0.053 0.8
TMT B
 LP 15 1737.53 1005.00
 HP 18 1022.83 429.00 0.018∗ 0.96
Verbal fluency
 LP 15 817.00 87.03
 HP 18 742.50 97.37 0.025∗ 0.8
∗Indicates p < 0.05.
∗∗Indicates p < 0.01.
Lower scores indicate better performance.

In the recall task, LP (538.00) performed worse than HP (416.11), though not significantly, whereas in the delayed recall task LP remembered significantly fewer words (mean IE 692.67) than HP (mean IE 511.67; p = 0.04).

In the 2-back test, no significant differences were detected between the groups, neither with regard to total IE (LP = 762.93, HP = 573.89, p = 0.22) nor with regard to reaction time (LP = 492.48, HP = 488.53, p = 0.84) or correct responses (LP = 24.0, HP = 27.1, p = 0.24).

In working memory as assessed by the OSPAN, HP (468.11) showed a significantly better outcome than LP (789.53) in total IE (p = 0.0068∗∗, d = 1.01). This was due to the poorer performance of LP in the number of correctly solved mathematical equations (LP = 36.07, HP = 38.83, p = 0.034) and their slower reaction times (LP = 3609.08, HP = 2579.06, p = 0.013). The number of memorized items did not significantly differ (LP = 1.9, HP = 2.73, p = 0.17). Inhibitory control was poorer in LP; this was significant for the incompatible stimuli (iFlanker: LP = 552.27, HP = 477.22, p = 0.037). LP (4.27) also made more mistakes than HP (2.22), although this difference was not significant (p = 0.098).

In the TMT tasks, HP outperformed the LP group with regard to total IE and reaction time. This was significant only for total IE in the TMT B (HP = 1022.83, LP = 1737.53, p = 0.018). The difference in the TMT A between LP (898.60) and HP (626.56) was not significant (p = 0.053).

Concerning verbal fluency, LP listed fewer words and obtained poorer IE scores (817.00) than HP (742.50, p = 0.025).

Further differences in linguistic performance were found: HP (n = 17) outperformed LP (n = 13) in the Text Reception Threshold Test (see Table 4) in all three conditions: periodic bars (LP = 38.34, HP = 47.28, p = 0.00002∗∗∗), floating bars (LP = 42.60, HP = 51.45, p = 0.00021∗∗∗), and random dots (LP = 37.87, HP = 45.92, p = 0.026). The percentage of unmasked text needed to decode 50% of the words in a sentence was significantly lower for HP: they could understand the meaning of a sentence even when more than 47% of its content was covered, in contrast to 38% in LP.

TABLE 4 - Results of the Text Reception Threshold Test (TRT) of low performers (LP) and high performers (HP)
TRT N Mean SD p Cohen's d
Periodic bars
 LP 13 38.34 7.82
 HP 17 47.28 3.29 0.00002∗∗∗ –1.57
Floating bars
 LP 13 42.60 10.30
 HP 17 51.45 2.78 0.00021∗∗∗ –1.25
Random dots
 LP 13 37.87 9.57
 HP 17 45.92 7.77 0.026∗ –0.94
∗Indicates p < 0.05.
∗∗∗Indicates p < 0.001.
Higher scores indicate better performance.

Data for the Lexical Decision Test were collected from 14 LP and 18 HP (see Fig. 1B, Table 5). The total Lexical Decision score of correct answers differed between LP (68.14) and HP (73.44, p = 0.0076∗∗). In addition, LP (2.53) gave less reliable answers than HP (4.12, p = 0.0021∗∗). Furthermore, LP showed worse reaction times for existing words (1033.21) than HP (784.68, p = 0.017). Thus, lexical access, as assessed by reaction times for the correct detection of existing words, was significantly slower in LP than in HP.

TABLE 5 - Results of the LEMO 2.0 subtest V9, the Lexical Decision Test, and the rapid automatized naming (RAN) in low performers (LP) and high performers (HP)
N Mean SD p Cohen's d
Lexical Decision Test
 Existing words reaction time (ms)
  LP 14 1033.21 382.30
  HP 18 784.68 194.10 0.017∗ 0.85
 Sensitivity d′
  LP 14 2.53 1.16
  HP 18 4.12 1.32 0.0021∗∗ –1.27
 Total correct
  LP 15 66.13 9.00
  HP 19 74.63 4.62 0.0039∗∗ –1.23
RAN (items/s)
 Objects
  LP 14 1.12 0.20
  HP 18 1.38 0.20 0.0026∗∗ –1.28
 Colors
  LP 14 1.20 0.29
  HP 18 1.43 0.26 0.031∗ –0.82
 Letters
  LP 14 1.98 0.58
  HP 18 2.52 0.29 0.0026∗∗ –1.25
 Numbers
  LP 14 2.04 0.57
  HP 18 2.65 0.34 0.0038∗∗ –1.34
∗Indicates p < 0.05.
∗∗Indicates p < 0.01.
In the LEMO as well as in the RAN, higher scores indicate better performance; in the Lexical Decision Test, lower reaction times indicate better performance.

A significant difference between LP and HP was also observed in the LEMO subtest V9, which measures the phonological input lexicon (see Fig. 1C, Table 5). LP (66.13) gave fewer correct answers than HP (74.63, p = 0.0039∗∗). Differences were also found for the two subgroups of items, the neologisms (LP = 31.73, HP = 37.16, p = 0.013) and the phonological words (LP = 35.07, HP = 37.47, p = 0.038). LP tended to categorize a graphemic neologism as a real word. According to the classification of Stadie et al. (34), more than 80% of HP fell within the normal range, whereas 53.3% of LP showed reduced performance. Additionally, more than 10% of LP were unable to make the judgment at all, which was true for none of the HP (p = 0.0077∗∗).

Moreover, HP were faster than LP in rapidly naming objects (LP = 1.12, HP = 1.38, p = 0.0026∗∗), colors (LP = 1.20, HP = 1.43, p = 0.031), letters (LP = 1.98, HP = 2.52, p = 0.0026∗∗), and numbers (LP = 2.04, HP = 2.65, p = 0.0038∗∗) (see Fig. 1D, Table 5). This difference was most pronounced for object and letter naming.

The discriminant function analysis based on the M3, RAN numbers, recall, cFlanker, Lexical Decision, and 2-back subtests clearly discriminated LP and HP (canonical r = 0.68, p = 0.0073, Wilks' lambda = 0.53). Rapid automatized naming of numbers (r = 0.56) and the lexical sensitivity score (r = 0.54) had the highest discriminatory power. Neurocognitive skills were less important for discriminating the two performance groups: only the attention score assessed by the M3 showed strong discriminatory power (r = 0.50); all other variables had only a slight impact (recall r = 0.29, inhibition assessed by cFlanker r = 0.21, and working memory r = 0.24). RAN of numbers predicted poor speech performance in 91.7% of cases (RAN score ≤2.22) and good performance in 80.6% (RAN score >2.22) (see Fig. 1E).
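The single-variable decision rule reported above (RAN-numbers cutoff at 2.22 items/s) can be expressed as a toy classifier. This is purely an illustration of the reported result, not a validated clinical tool, and the function name is ours:

```python
def classify_by_ran_numbers(items_per_second, cutoff=2.22):
    """Threshold rule from the discriminant analysis: RAN-numbers
    scores at or below the cutoff predicted the low-performer group
    in 91.7% of cases, and scores above it the high-performer group
    in 80.6% of cases (per the study's own figures)."""
    return "LP" if items_per_second <= cutoff else "HP"

# Group means from Table 5: LP = 2.04, HP = 2.65 items/s.
print(classify_by_ran_numbers(2.04))  # prints LP
print(classify_by_ran_numbers(2.65))  # prints HP
```

The point of the sketch is that a single fast, automatized naming measure carried most of the discriminative information in this sample.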

Discussion

Speech understanding is a complex, dynamic process involving multiple factors that influence each other (39). An important part is the acoustic input, which is altered in various ways by the underlying hearing impairment, the signal coding in the CI, and restrictions of the electrode–nerve interface. This has recently been demonstrated in 11 high- and 10 low-performing CI users, in whom spectral resolution, assessed by the spectral temporally modulated ripple test (SMRT), had the highest impact on speech performance (1).

Besides bottom-up processes, top-down mechanisms also play a significant role in speech understanding. Various studies have described the influence of neurocognitive functions on speech outcome in NH, hearing-impaired, and CI users (40–43). So far, there are only a few studies that focus on poorly performing CI users, who represent only a small portion of all CI recipients (1,5,7,39). Furthermore, commonly used auditory-based neurocognitive test batteries might also lead to confounded results in severely hearing-impaired subjects (44).

In general, the ability to recognize detailed phonological structures (phonological sensitivity) in visually presented rhyme-judgment tasks is worse in hearing-impaired than in NH subjects (22). Prolonged hearing loss can even lead to a degeneration of long-term phonological representations (45). A similar observation was recently published by Smith et al. (43) for cochlear implant recipients aged 9 to 29 years: CI users achieved significantly poorer results in repeating nonsense words than subjects with NH, and rapid phonological coding significantly correlated with speech perception in CI listeners.

In our study, phonological sensitivity and the phonological input lexicon as assessed by the LEMO were poorer in LP than in CI users with a good speech outcome. LP had significantly more problems in recognizing orthographically presented neologisms that sound like real words (homophonic) or that do not sound like real words (phonologically neologistic). In general, words in this task must be read via the grapheme–phoneme correspondence route and compared with the input lexicon via the phonological loop (34). The most prominent difference was observed with regard to neologisms, which were often wrongly categorized as real words by LP.

Impaired lexical representations might also explain the significant differences in lexical access. LP made significantly more errors in the lexical decision task than HP, and lexical access time was significantly longer for existing words. One might speculate that, due to insufficient phonological processing, storage and word retrieval are more difficult for LP, as the comparison with the lexicon and the subsequent output take more time and lead to a higher number of errors.

Nagels et al. (46), analyzing 15 well-performing CI users aged 30 to 73 years, also found that CI users had lower accuracy scores and longer reaction times than NH listeners in a lexical decision task, especially for nonwords. Moreover, CI users with a low word–nonword sensitivity had a longer lexical competition time and relied more on semantic competitors in the context condition than those with a high sensitivity. These behavioral data correspond to functional neuroimaging results obtained in rhyming tasks on written words in good and poor CI performers by Lazard et al. (47).

Furthermore, deficits in automatization processes and processing speed caused by prolonged auditory deprivation may hamper the implicit processing of speech understanding in LP (25). In the Rapid Automatized Naming Test, which assesses the automatization process, HP significantly outperformed LP and RAN performance had the highest impact on speech recognition.

Besides the implicit channel, the compensatory explicit channel also seems to be affected in poorly performing subjects (18). Significant differences in attention and working memory in highly demanding tasks such as the OSPAN were observed in LP, especially in those with an extraordinarily low speech perception of 20% or less. Better-performing CI users seem to make more efficient use of the explicit processing loop, which requires good working memory and a high level of attention to match the degraded signal with the representations stored in long-term memory, as described in a visually presented rhyme-judgment task in hearing-impaired subjects (19). However, this may not be the case if the sensory input is too poor to benefit from top-down processing, as already stated by Tamati et al. (1). Degradation of bottom-up speech perception with cochlear implants might prevent benefit from top-down speech perception mechanisms, as demonstrated by Başkent (48), who studied phonemic restoration of interrupted sentences in normal-hearing listeners using a noise-band vocoder simulation.

Conclusion

CI users are not a homogeneous group with regard to their cognitive and linguistic abilities, and strategies of speech perception might be differently implemented in speech processing. Rapid phonological coding skills seem to be critical for speech recognition in CI recipients. Most low performers had highly impaired access to phonological structures and automatic processing; good cognitive skills could only partially compensate for this deficit in some subjects, leading to improved speech performance.

So far it is not known whether postoperative auditory rehabilitation interventions based on the individual linguistic and cognitive pattern might help poorly performing CI users to reach a higher level of speech recognition. Some patients might benefit from an analytic training on phonological processing structures, whereas in others a synthetic training of compensatory mechanisms or basal auditory functions might be more effective.

In addition, studies should be conducted to clarify whether modified signal processing strategies adapted to the individual profile of the implant recipient could help to overcome the enigma of poorly performing CI users, as recently reported for hearing aid users (49). Moreover, inclusion of a comprehensive linguistic and neurocognitive test battery in the assessment of CI candidates may allow postoperative speech outcome to be predicted more accurately.

Acknowledgments

The authors thank Ursula Lehner-Mayrhofer, MED-EL, for editing a version of this manuscript; T. Brand and J. Müller, Medical Physics Group Oldenburg, who provided the framework for the TRT; as well as I. Haubitz, Würzburg, for statistical analysis and W. Oschmann for data collection.

References

1. Tamati TN, Ray C, Vasil KJ, Pisoni DB, Moberly AC. High- and low-performing adult cochlear implant users on high-variability sentence recognition: differences in auditory spectral resolution and neurocognitive functioning. J Am Acad Audiol 2020; 31:324–335.
2. Green KMJ, Bhatt YM, Mawman DJ, et al. Predictors of audiological outcome following cochlear implantation in adults. Cochlear Implants Int 2007; 8:1–11.
3. Holden LK, Finley CC, Firszt JB, et al. Factors affecting open-set word recognition in adults with cochlear implants. Ear Hear 2013; 34:342–360.
4. Blamey P, Arndt P, Bergeron F, et al. Factors affecting auditory performance of postlinguistically deaf adults using cochlear implants. Audiol Neurootol 1996; 1:293–306.
5. Moberly AC, Bates C, Harris MS, Pisoni DB. The enigma of poor performance by adults with cochlear implants. Otol Neurotol 2016; 37:1522–1528.
6. Blamey P, Artieres F, Başkent D, et al. Factors affecting auditory performance of postlinguistically deaf adults using cochlear implants: an update with 2251 patients. Audiol Neurootol 2013; 18:36–47.
7. Lenarz M, Sönmez H, Joseph G, Büchner A, Lenarz T. Long-term performance of cochlear implants in postlingually deafened adults. Otolaryngol Head Neck Surg 2012; 147:112–118.
8. Rumeau C, Frère J, Montaut-Verient B, Lion A, Gauchard G, Parietti-Winkler C. Quality of life and audiologic performance through the ability to phone of cochlear implant users. Eur Arch Otorhinolaryngol 2015; 272:3685–3692.
9. Hillyer J, Elkins E, Hazlewood C, Watson SD, Arenberg JG, Parbery-Clark A. Assessing cognitive abilities in high-performing cochlear implant users. Front Neurosci 2018; 12:1056.
10. Chen SY, Grisel JJ, Lam A, Golub JS. Assessing cochlear implant outcomes in older adults using HERMES. Otol Neurotol 2017; 38:e405–e412.
11. Sharma RK, Chen SY, Grisel J, Golub JS. Assessing cochlear implant performance in older adults using a single, universal outcome measure created with imputation in HERMES. Otol Neurotol 2018; 39:987–994.
12. Hoppe U, Hocke T, Hast A, Iro H. Maximum preimplantation monosyllabic score as predictor of cochlear implant outcome. HNO 2019; 67:62–68.
13. Leung J, Wang N-Y, Yeagle JD, et al. Predictive models for cochlear implantation in elderly candidates. Arch Otolaryngol Head Neck Surg 2005; 131:1049–1054.
14. Lazard DS, Vincent C, Venail F, et al. Pre-, per- and postoperative factors affecting performance of postlinguistically deaf adults using cochlear implants: a new conceptual model over time. PLoS One 2012; 7:e48739.
15. Moran M, Vandali A, Briggs RJS, Dettman S, Cowan RSC, Dowell RC. Speech perception outcomes for adult cochlear implant recipients using a lateral wall or perimodiolar array. Otol Neurotol 2019; 40:608–616.
16. Rönnberg J, Lunner T, Zekveld A, et al. The ease of language understanding (ELU) model: theoretical, empirical, and clinical advances. Front Syst Neurosci 2013; 7:31.
17. Rönnberg J, Holmer E, Rudner M. Cognitive hearing science and ease of language understanding. Int J Audiol 2019; 58:247–261.
18. Rönnberg J, Rudner M, Foo C, Lunner T. Cognition counts: a working memory system for ease of language understanding (ELU). Int J Audiol 2008; 47: (suppl): S99–S105.
19. Classon E, Rudner M, Rönnberg J. Working memory compensates for hearing related phonological processing deficit. J Commun Disord 2013; 46:17–29.
20. Rönnberg J, Danielsson H, Rudner M, et al. Hearing loss is negatively related to episodic and semantic long-term memory but not to short-term memory. J Speech Lang Hear Res 2011; 54:705–726.
21. Zhan KY, Lewis JH, Vasil KJ, et al. Cognitive functions in adults receiving cochlear implants. Otol Neurotol 2020; 41:e322–e329.
22. Lyxell B, Andersson J, Andersson U, Arlinger S, Bredberg G, Harder H. Phonological representation and speech understanding with cochlear implants in deafened adults. Scand J Psychol 1998; 39:175–179.
23. Winn MB. Rapid release from listening effort resulting from semantic context, and effects of spectral degradation and cochlear implants. Trends Hear 2016; 20:1–17.
24. Magnuson JS, Dixon JA, Tanenhaus MK, Aslin RN. The dynamics of lexical competition during spoken word recognition. Cogn Sci 2007; 31:133–156.
25. Moberly AC, Mattingly JK, Castellanos I. How does nonverbal reasoning affect sentence recognition in adults with cochlear implants and normal-hearing peers? Audiol Neurootol 2019; 24:127–138.
26. Moberly AC, Houston DM, Castellanos I. Non-auditory neurocognitive skills contribute to speech recognition in adults with cochlear implants. Laryngoscope Investig Otolaryngol 2016; 1:154–162.
27. Lyxell B, Andersson U, Borg E, Ohlsson IS. Working-memory capacity and phonological processing in deafened adults and individuals with a severe hearing impairment. Int J Audiol 2003; 42 (suppl):S86–S89.
28. Hahlbrock K-H. Speech audiometry and new word tests [in German]. Arch Ohren Nasen Kehlkopfheilkd 1953; 162:394–431.
29. Hochmair-Desoyer I, Schulz E, Moser L, Schmidt M. The HSM sentence test as a tool for evaluating the speech understanding in noise of cochlear implant users. Am J Otol 1997; 18:S83.
30. Falkenstein M, Hoormann J, Hohnsbein J. ERP components in Go/Nogo tasks and their relation to inhibition. Acta Psychol (Amst) 1999; 101:267–291.
31. Völter C, Götze L, Falkenstein M, Dazert S, Thomas JP. Application of a computer-based neurocognitive assessment battery in the elderly with and without hearing loss. Clin Interv Aging 2017; 12:1681–1690.
32. Zekveld AA, George ELJ, Kramer SE, Goverts ST, Houtgast T. The development of the text reception threshold test: a visual analogue of the speech reception threshold test. J Speech Lang Hear Res 2007; 50:576–584.
33. Carroll R, Warzybok A, Kollmeier B, Ruigendijk E. Age-related differences in lexical access relate to speech recognition in noise. Front Psychol 2016; 7:990.
34. Stadie N, Cholewa J, de Bleser R. LEMO 2.0: Lexicon model-oriented: diagnostic tool for aphasia, dyslexia and dysgraphia [in German]. Hofheim: NAT-Verlag; 2013.
35. Mayer A. Rapid automatized naming (RAN) and reading [in German]. Forschung Sprache 2018; 6:20–41.
36. Mayer A. Test for measuring phonological awareness and naming speed (TEPHOBE): Manual [in German]. 3rd ed. München, Basel: Ernst Reinhardt Verlag; 2016.
37. Kruskal WH, Wallis WA. Use of ranks in one-criterion variance analysis. J Am Stat Assoc 1952; 47:583–621.
38. Cooley WW, Lohnes PR. Multivariate Data Analysis. New York: Wiley and Sons Inc; 1971.
39. Pisoni DB, Kronenberger WG, Harris MS, Moberly AC. Three challenges for future research on cochlear implants. World J Otorhinolaryngol Head Neck Surg 2017; 3:240–254.
40. Akeroyd MA. Are individual differences in speech reception related to individual differences in cognitive ability? A survey of twenty experimental studies with normal and hearing-impaired adults. Int J Audiol 2008; 47 (suppl):S53–S71.
41. AuBuchon AM, Pisoni DB, Kronenberger WG. Verbal processing speed and executive functioning in long-term cochlear implant users. J Speech Lang Hear Res 2015; 58:151–162.
42. Kaandorp MW, Smits C, Merkus P, Festen JM, Goverts ST. Lexical-access ability and cognitive predictors of speech recognition in noise in adult cochlear implant users. Trends Hear 2017; 21:1–15.
43. Smith GNL, Pisoni DB, Kronenberger WG. High-variability sentence recognition in long-term cochlear implant users: associations with rapid phonological coding and executive functioning. Ear Hear 2019; 40:1149–1161.
44. Dupuis K, Pichora-Fuller MK, Chasteen AL, Marchuk V, Singh G, Smith SL. Effects of hearing and vision impairments on the Montreal Cognitive Assessment. Neuropsychol Dev Cogn B Aging Neuropsychol Cogn 2015; 22:413–437.
45. Moberly AC, Harris MS, Boyce L, Nittrouer S. Speech recognition in adults with cochlear implants: the effects of working memory, phonological sensitivity, and aging. J Speech Lang Hear Res 2017; 60:1046–1061.
46. Nagels L, Bastiaanse R, Başkent D, Wagner A. Individual differences in lexical access among cochlear implant users. J Speech Lang Hear Res 2019; 63:286–304.
47. Lazard DS, Lee H-J, Gaebler M, Kell CA, Truy É, Giraud A-L. Phonological processing in post-lingual deafness and cochlear implant outcome. Neuroimage 2010; 49:3443–3451.
48. Başkent D. Effect of speech degradation on top-down repair: phonemic restoration with simulations of cochlear implants and combined electric-acoustic stimulation. J Assoc Res Otolaryngol 2012; 13:683–692.
49. Yumba WK. Cognitive processing speed, working memory, and the intelligibility of hearing aid-processed speech in persons with hearing impairment. Front Psychol 2017; 8:1308.

Keywords: Cochlear implantation; Linguistic skills; Low performer; Neurocognitive functions

Copyright © 2020 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of Otology & Neurotology, Inc.