
Research Article

Relationship Between Speech Recognition in Quiet and Noise and Fitting Parameters, Impedances and ECAP Thresholds in Adult Cochlear Implant Users

de Graaff, Feike1; Lissenberg-Witte, Birgit I.2; Kaandorp, Marre W.1; Merkus, Paul1; Goverts, S. Theo1; Kramer, Sophia E.1; Smits, Cas1

doi: 10.1097/AUD.0000000000000814


INTRODUCTION

Speech recognition performance varies widely among cochlear implant (CI) users. The variance in speech recognition still cannot be fully explained, but many factors potentially contributing to the large variation in outcome have been identified (Blamey et al. 1996; Finley et al. 2008; Lazard et al. 2012; Blamey et al. 2013). Some studies showed that patient characteristics, such as age, duration of deafness, etiology, and linguistic and cognitive abilities, partly explain the variance in speech recognition (Lazard et al. 2012; Blamey et al. 2013; Holden et al. 2013; Kaandorp et al. 2017; James et al. 2019). In addition, device and implant factors are related to speech recognition outcomes. Examples of these factors include electrode positioning, electrode insertion depth, and the number of inserted or active electrodes (Skinner et al. 2002; Yukawa et al. 2004; Finley et al. 2008; Lazard et al. 2012; Esquia Medina et al. 2013; Holden et al. 2013; James et al. 2019). Also, fitting of CI processors is essential to achieve optimal speech recognition for CI users. Many studies have shown the effect of fitting parameters on speech recognition (Skinner et al. 1999; Loizou et al. 2000; James et al. 2003; Skinner 2003; Spahr & Dorman 2005; Holden et al. 2011; Van der Beek et al. 2015; Busby & Arora 2016). Identifying the effects of changing fitting parameters on speech recognition can help guide clinicians and improve fitting practices. Even so, there is no commonly accepted good clinical practice for fitting CIs.

The present study aims to add to previous research by using prediction models to identify parameters that relate to speech recognition in quiet and noise in a group of adult Cochlear CI users. Only Cochlear CI users were included, because they form the largest group of adult CI candidates in our CI center. A large group is needed because the number of variables that can potentially predict speech recognition is large. Parameters used during this study were those that can be adjusted by the audiologist during a fitting session and those that may change between fitting sessions. These included T and C levels, electrical dynamic range (DR), aided sound field thresholds, electrically evoked compound action potential (ECAP) thresholds, and electrode impedances, but also parameters that are related to the profile of T levels, C levels, impedances, and ECAP thresholds. The rationale for focusing on important fitting parameters, ECAP thresholds and impedances is that (1) these parameters are available to the fitting audiologist during a fitting session, and (2) these parameters can be adjusted or may change between fitting sessions. Vaerenberg et al. (2014b) conducted a global survey on fitting practices and found considerable differences between CI centers, but they also concluded that all CI centers focus on the setting of stimulation levels based on psychophysically derived measures of threshold (i.e., T level for Cochlear) and comfort (i.e., C level for Cochlear). Although a large number of fitting parameters and other measures are available to the clinician during a fitting session for Cochlear sound processors, other parameters (e.g., speech coding strategy, pulse width, stimulation rate, gain, Q factor, frequency allocation table, number of maxima) are usually set at default. Therefore, the present study focuses on the fitting parameters that are most often manipulated by audiologists (i.e., T and C levels and the DR).

An important goal of fitting CI sound processors is to maximize the use of the DR of the auditory nerve by setting T and C levels for each electrode. The electrical DR is the difference between T and C levels. C levels that are set either too low or too high may have a negative impact on speech recognition and sound quality (Wolfe & Schafer 2015). Setting T levels too low (i.e., below hearing threshold) results in the inaudibility of soft sounds, while T levels that are set too high will result in ambient sounds that may be too loud (Wolfe & Schafer 2015; Busby & Arora 2016). It has been reported that high variability of T levels across electrodes, due to variations in the electrode-to-neuron distance and neural survival, can negatively impact speech recognition as well (Pfingst & Xu 2004, 2005; Zhou & Pfingst 2014). Aided sound field thresholds are often assessed to determine the audibility of soft sounds, with targets usually set at 20 to 30 dB HL. The aided thresholds are related to the T-SPL (default 25 dB SPL), which maps the minimum input intensity level to the electrical stimulation at T level, the microphone sensitivity, and the T levels. If T levels are set correctly, aided thresholds should be around the target level of 25 dB HL (i.e., with T-SPL set at default and sensitivity at 12; Cochlear 2012).

Multiple studies have investigated the use of ECAP as an alternative to behavioral parameters in the fitting of adult and pediatric patients (Brown et al. 2000; Hughes et al. 2000; Franck & Norton 2001; Smoorenburg et al. 2002; Botros & Psarros 2010a, b). The ECAP represents the response of the auditory nerve after electrical stimulation, and can be measured using the technology incorporated in modern CIs. In CI users with a Cochlear device, the technique is called neural response telemetry (NRT). Although the majority of studies only found a weak to moderate correlation between NRT thresholds and stimulation levels, studies have shown the clinical relevance of NRT measurements. For instance, the NRT threshold profile can be used by clinicians to determine the profile of stimulation levels (Brown et al. 2000; Hughes et al. 2000; Franck & Norton 2001; Smoorenburg et al. 2002; Botros & Psarros 2010b).

Electrode functioning is regularly assessed at the beginning of fitting sessions through electrode impedance testing. Electrode impedance is a measure of the resistance to electrical current flow across an electrode (Wolfe & Schafer 2015). Once electrodes are stimulated, the impedances decrease and usually remain stable up to, at least, 24 months after implantation (Hughes et al. 2001). Changes in electrode impedance can indicate changes in the surrounding tissue or electrode function (i.e., short or open circuits) which may negatively affect patient performance.

Objective of Present Study

The objective of this study was to identify parameters that affect speech recognition of CI users and could thus be considered when improving current fitting practices. Clinical data of postlingually deafened, unilaterally implanted adult Cochlear device users were used. Prediction models for speech recognition in quiet and in noise were built using fitting parameters (related to T and C levels and DR), sound field–aided thresholds, and objective measures (i.e., NRT thresholds and impedances) that are available to the clinician during a fitting session and can be adjusted or may change between fitting sessions. A total of 33 parameters were considered. Other factors (e.g., age, duration of deafness, etiology of deafness, electrode position, cognitive and linguistic abilities) were not included in the models. The prediction models were built separately for two groups of CI users: postlingually deafened CI users with late onset (LO) and CI users with early onset (EO; i.e., before the age of 7 years) of severe hearing impairment, because we speculated that optimal fitting parameters could differ between these groups. For instance, Vargas et al. (2013) found that patients with hearing experience before cochlear implantation have lower electrical thresholds and larger DRs than patients affected by long-term profound deafness. Prelingually deafened adults were excluded, because speech recognition is often limited in this group of patients (Teoh et al. 2004).

MATERIALS AND METHODS

Rehabilitation and General Fitting Procedures for Adult CI Users in Amsterdam UMC, Location VUmc

Because large variations exist between CI centers on all aspects of fitting (Vaerenberg et al. 2014b), we briefly describe the general fitting procedure and rehabilitation program of CI users in our clinic. Although small differences in fitting procedures between audiologists in our CI center may exist, the fitting procedures used for the patients in the present study can be considered largely similar.

In our CI center, the rehabilitation program for newly implanted CI users comprises weekly visits to the clinic up to 6 weeks after initial activation of the sound processor, 3 visits in the following 5 months, and annual follow-up visits thereafter. During the first weeks of rehabilitation, emphasis is put on fitting of the sound processor and auditory rehabilitation. Fitting, which is mainly performed by changing T and C levels, is guided by two basic principles: first, the entire DR of the auditory nerve should be used and, second, soft sounds should be audible. In general, the speech coding strategy and its specific parameters are initially set at default and are rarely modified. In our clinic, the default speech coding strategy is advanced combination encoder, the stimulation mode is MP1 + 2 (monopolar) with a stimulation rate of 900 Hz, a pulse width of 25 µsec, 8 maxima, the standard frequency allocation table, and a Q value of 20. In addition, we normally use no channel gain and set the sensitivity at 12 and the volume at 10. Next to fitting of the sound processor, speech recognition performance and aided thresholds are assessed, and the outcomes are used for optimization of the fitting.

Fitting of Cochlear devices is done with the Custom Sound programming software. In our center, each fitting session generally starts by measuring electrode impedances in all four electrode coupling modes to identify open or short circuits. Subsequently, stimulation levels are psychophysically determined. T and C levels are generally assessed on a subset of electrodes, and the levels of intermediate electrodes are then interpolated. T levels are determined by presenting a stimulus (i.e., a train of biphasic pulses with a stimulation rate of 900 pps and a duty cycle of 500 msec) in a descending procedure where CI users are instructed to raise their hand or say “yes” when they hear the stimulus. C levels are determined using a loudness scaling method in which the clinician gradually increases the presentation level of a stimulus. The CI users are asked to indicate their loudness percept by pointing to categories on a 10-point loudness scale. C levels are set at a level that is “loud.” Subsequently, all C levels are decreased by a certain percentage of the DR. Then, the sound processor is switched to live speech mode and the clinician increases C levels while the CI user listens to speech and louder sounds (e.g., clapping hands) to find the user’s most comfortable level. Loudness balancing across electrodes is used during some fitting sessions to ensure that the CI user perceives the stimuli to be equally loud. Here, sets of four adjacent electrodes are stimulated at C level using the sweep functionality of Custom Sound, and individual C levels are adjusted until the CI user reports equal loudness of all four electrodes.

NRT measurements are performed on all electrodes intraoperatively, and on a subset of electrodes during some of the fitting sessions in the first postoperative year and at annual visits.

Study Population

We retrospectively identified CI users who visited the Amsterdam UMC, location VUmc, for their annual follow-up between January 2015 and December 2017. The data of the most recent annual follow-up were used for CI users who had multiple follow-ups in this time span. Postlingually deafened Cochlear CI users who were unilaterally implanted at our CI center after the age of 18 years were included. All participants were experienced users with more than 1 year of CI experience and at least 6 months of experience with the CI settings they used during speech recognition testing. Of this group, CI users with strongly deviating parameters (e.g., a speech coding strategy other than advanced combination encoder or more than 3 disabled electrodes) were excluded. Our final study population consisted of 138 patients. Implant and processor details are listed in Table 1.

TABLE 1. Type of implant and sound processor parameters

The final study population was split into two groups: postlingually deaf adult patients with LO of severe hearing impairment (LO group, n = 97) and postlingually deaf adult patients with EO of severe hearing impairment (EO group, n = 41). The EO group comprised CI users who were fitted with hearing aids before the age of 7 years or went to a school for the deaf. The mean age at the time of the annual follow-up was 67.9 years (SD = 13.4) and 49.6 years (SD = 13.4), the mean age at implantation was 62.0 years (SD = 13.4) and 44.0 years (SD = 13.8), and the mean CI experience was 5.9 years (SD = 3.5) and 5.6 years (SD = 3.5) for the LO and EO groups, respectively.

Outcome Measures

Speech recognition in quiet and in noise (procedures described in the following paragraphs) was assessed in a sound-treated booth, where CI users were seated in front of a loudspeaker at a distance of approximately 70 cm. For the purpose of this study, speech recognition in quiet and in noise scores that were assessed with the CI alone were used.

Speech Recognition in Quiet •

Speech recognition in quiet was assessed with monosyllabic words with a consonant–vowel–consonant (CVC) structure, pronounced by a female Dutch speaker (Bosman & Smoorenburg 1995). CVC words were presented in quiet at 65 dB SPL. Each CVC word consisted of three phonemes, and the score of the CVC test in quiet was calculated as the percentage of phonemes correct. Typically, 3 lists of 12 words were presented, but occasionally, CI users were presented with fewer than 3 lists of CVC words. The mean percentage of phonemes correct of the presented lists (i.e., two or three lists) was calculated, omitting the first CVC word of each list.

Speech Recognition in Noise •

Speech recognition in noise was assessed with the standard digits-in-noise test (Smits et al. 2013; Kaandorp et al. 2015). The digits-in-noise test was developed to primarily measure auditory speech recognition abilities in noise. Thus, the test result depends minimally on top-down processing such as linguistic skills and cognition. It is less representative of real-life listening than a sentence test, but it may be more appropriate than a sentence test for the evaluation of CI fitting (Smits et al. 2013). Other studies (Kaandorp et al. 2015) showed a strong relationship between speech reception thresholds (SRTs) measured with digits-in-noise and SRTs measured with sentences for CI users. Twenty-four digit-triplets were presented in steady-state speech-shaped noise using an adaptive procedure, with the overall presentation level of target speech and masking noise fixed at 65 dBA. The digits-in-noise test assesses the SRT, which is defined as the signal to noise ratio (SNR) in dB at which a listener correctly recognizes 50% of the digit-triplets. Typically, 2 lists of 24 digit-triplets were presented to assess the mean SRT, but occasionally, the SRT was assessed with only 1 list of digit-triplets. In that case, the SRT assessed with one list was used for the analyses. The SRT was not assessed in 4 CI users. These CI users were only included in the analyses for speech recognition in quiet.
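To illustrate how an adaptive SRT track of this kind behaves, the sketch below simulates a 1-up/1-down staircase over digit-triplets. The starting SNR, 2 dB step size, and averaging rule are illustrative assumptions of this sketch, not necessarily the exact parameters of the published digits-in-noise test.

```python
def digits_in_noise_srt(listener, n_triplets=24, start_snr=0.0, step=2.0):
    """Simulate a 1-up/1-down adaptive track: the SNR decreases after a
    correctly repeated triplet and increases after an error, so the track
    converges on the 50%-correct point. The SRT here is the mean SNR of
    trials 5 onward, including the SNR the next triplet would have used."""
    snr, track = start_snr, []
    for _ in range(n_triplets):
        track.append(snr)
        snr += -step if listener(snr) else step
    track.append(snr)  # would-be SNR of the next (25th) triplet
    return sum(track[4:]) / len(track[4:])

# A deterministic toy listener that recognizes a triplet iff SNR >= 0 dB;
# the track then oscillates around 0 dB and the estimate lands nearby.
toy_srt = digits_in_noise_srt(lambda snr: snr >= 0.0)
```

With a real (probabilistic) listener, the oscillating track averages out to the SNR at which half of the triplets are repeated correctly.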

Independent Variables

Speech recognition scores, volume and sensitivity settings, program, and MAP number were registered on a special form with a checklist for the clinician. If the form was not filled in, data were retrieved from the electronic patient files, the audiometer database, and the fitting software Custom Sound. Independent variables were analyzed as continuous variables, unless stated otherwise.

Fitting Parameters

The present study focuses on the fitting parameters that are most often manipulated by audiologists (i.e., T and C levels and the DR), as reported by Vaerenberg et al. (2014b). T and C levels depend on stimulation rate, pulse width, and volume settings. It is therefore important to correct the T and C levels if the stimulation rate, pulse width, and volume settings were not at default at the time of speech recognition assessment. The stimulation levels were converted before the analyses. The actual T and C levels, stimulation rate, pulse width, and volume setting were used to calculate the corresponding stimulation levels with a volume setting of 10, stimulation rate of 900 Hz and pulse width of 25 µsec. The conversion for pulse width was based on constant pulse charge and for stimulation rate a linear function was used (Cochlear, Reference Note 1). A correction was applied for 59 out of 138 CI users. The correction was mainly due to the volume not being set at 10 (n = 52). If volume settings that were used during speech recognition assessment were not available, C levels could not be converted and were therefore considered as missing data (n = 5). The DR was calculated by subtracting the corrected T levels from the corrected C levels. The corrected T and C levels were used to calculate the DR, because the DR would be different if, for instance, the volume was shifted from the default value of 10 to a lower value of 6. The mean, SD, and range (highest minus lowest) of T levels, C levels, and DR were calculated to describe the profile of stimulation levels over 22 electrodes (i.e., profile and variation). In addition, across-site variation (ASV) of T and C levels and DR was determined. Here, the mean absolute difference in levels between adjacent electrodes was calculated (Pfingst et al. 2004). T and C levels and DR were not always available for each electrode because of disabled electrodes. In that case, the next available electrode was used to calculate the ASV. 
All parameters are listed in Table 2.
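The profile measures described above (mean, SD, range, and ASV over the 22 electrodes, with disabled electrodes skipped) can be sketched as follows. The level values are hypothetical, and encoding disabled electrodes as NaN is a convention of this sketch.

```python
import numpy as np

def profile_stats(levels):
    """Mean, SD, and range (highest minus lowest) over available electrodes.
    Disabled electrodes are encoded as NaN and ignored."""
    v = np.asarray(levels, dtype=float)
    v = v[~np.isnan(v)]
    return {"mean": v.mean(), "sd": v.std(ddof=1), "range": v.max() - v.min()}

def across_site_variation(levels):
    """Across-site variation (ASV): mean absolute difference between adjacent
    available electrodes. NaNs (disabled electrodes) are dropped first, so the
    next available electrode is used, as described in the text."""
    v = np.asarray(levels, dtype=float)
    v = v[~np.isnan(v)]
    return float(np.abs(np.diff(v)).mean())

# Hypothetical corrected T and C levels (CL units) for a 22-electrode array;
# one disabled electrode is marked NaN.
t_levels = np.array([120.0] * 10 + [125.0] * 12)
c_levels = np.array([170.0] * 11 + [np.nan] + [175.0] * 10)
dr = c_levels - t_levels  # electrical dynamic range per electrode
```

The same three functions are applied to T levels, C levels, and the DR to obtain the 33 candidate predictors' profile-related subset.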

TABLE 2. Univariate correlations between candidate predictors and speech recognition in quiet and in noise for the LO of severe hearing impairment and EO of severe hearing impairment groups

Aided Thresholds, NRT Thresholds, and Impedances •

Sound field–aided thresholds at octave frequencies from 125 to 8 kHz were measured with narrowband (1/3 octave) noise stimuli and averaged to obtain the mean-aided sound field threshold (Table 2, audiometry). T-SPL relates the minimum intensity input level to the electrical stimulation at T level. C-SPL relates the maximum intensity input level to the electrical stimulation at C level. Both T-SPL and C-SPL depend on the sensitivity setting, and were therefore corrected for the sensitivity setting before the analyses. The difference between the mean-aided sound field threshold and corrected T-SPL was calculated by subtracting the corrected T-SPL from the aided threshold. Data were considered as missing if either the aided thresholds were not assessed (n = 2) or if T-SPL could not be converted (n = 3).

NRT thresholds were measured intraoperatively and during annual visits with the autoNRT functionality of Custom Sound. The NRT thresholds measured intraoperatively were generally assessed at all electrodes, while NRT thresholds measured at annual visits were assessed on a selection of electrodes (i.e., commonly measured electrodes in our center are electrodes 1, 2, 6, 11, 16, and 22). NRT thresholds cannot be assessed with autoNRT in certain types of implants (CI24RCS and CI24RCA) or when patients indicate that stimuli are too loud. If NRT thresholds were not assessed during the annual visit, the most recently measured thresholds were used, but only if these thresholds were measured more than 1 year after implantation (n = 28). Otherwise, data were considered as missing (see Table 2 for the number of missing values). The mean, SD, range, and ASV were calculated for the NRT thresholds measured intraoperatively (Table 2, NRT intraoperative) and during the annual visit (Table 2, NRT postoperative). These measures were included to describe the profile of NRT thresholds. The mean and mean absolute differences were calculated between NRT thresholds measured intraoperatively and NRT thresholds measured during the annual visit, using the electrodes that were assessed during the annual visit. In addition, absolute and mean differences between the current C levels and NRT thresholds measured intraoperatively and during the annual visit were calculated at electrodes that were used for the measurements at implantation and annual visit, respectively. These differences were included because in our CI center, NRT thresholds and profiles are often used as a guide for setting C levels in children. Finally, the NRT threshold of the most frequently assessed electrode (electrode 22) during the annual visit was included.

There are four different impedance measures available: monopolar 1 (MP1), monopolar 2 (MP2), monopolar 1 + 2 (MP1 + 2), and common ground. Because the correlation between the different impedance measures was very strong (r > 0.9), we opted to use impedances measured in MP1 + 2 mode (i.e., corresponding to the commonly used MP1 + 2 stimulation mode). Here, an intracochlear electrode is chosen as the active electrode and both extracochlear electrodes (MP1 + 2) are chosen as return electrodes. The mean and SD were calculated, in addition to the mean and absolute differences in impedance between adjacent electrodes (see Table 2 in the main text and Figure 1 in Supplemental Digital Content, http://links.lww.com/EANDH/A585). Disabled electrodes were not included in the calculation of the different impedance measures. If impedances were not measured during the annual visit (n = 5), the most recently measured impedances were used, but only if these were measured more than 1 year after implantation and, if possible, at the time of other objective measures (i.e., NRT measurements).

Statistical Analyses

All statistical analyses were performed with SPSS, version 22.0. Speech recognition in quiet scores were transformed to rationalized arcsine units (Sherbecoe & Studebaker 2004) to normalize variance across the range of scores. A log-transformation [ln(SRT + 7)] was performed on the SRT data to achieve a normal distribution.
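The two transformations can be sketched as below, assuming the classic rationalized arcsine formula of Studebaker (1985); Sherbecoe & Studebaker (2004) describe refinements for CVC word materials that are not reproduced here.

```python
import math

def rau(percent_correct, n_items):
    """Rationalized arcsine units for a percent-correct score out of
    n_items scored units, using the Studebaker (1985) constants."""
    x = round(percent_correct / 100.0 * n_items)  # number of units correct
    theta = (math.asin(math.sqrt(x / (n_items + 1.0)))
             + math.asin(math.sqrt((x + 1.0) / (n_items + 1.0))))
    return (146.0 / math.pi) * theta - 23.0

def transform_srt(srt_db):
    """The log-transformation the authors applied to SRTs: ln(SRT + 7)."""
    return math.log(srt_db + 7.0)
```

For example, a score of 50% out of 100 scored phonemes maps to 50 RAU; the transform stretches the ends of the percent scale so that variance is more uniform across the score range.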

Overall, NRT measures had a considerable amount of missing data (Table 2), which were assumed to be missing at random. Multiple imputation was used to handle this type of missing data (Sterne et al. 2009; Netten et al. 2017). First, the distributions of the different NRT variables were visually inspected. Non-normally distributed variables were transformed using a log-transformation. Second, missing data were imputed using linear regression, and the imputation was repeated 10 times. Finally, variables that were log-transformed before imputation were back transformed. Some of the data could not be imputed, because they were not missing at random. As mentioned earlier in the "Aided Thresholds, NRT Thresholds, and Impedances" section, NRT measurements are not possible with certain older implants. Therefore, the imputed data for users of these implants were deleted, but data of these CI users were included in the remaining analyses. The statistical analyses described later were performed on the imputed dataset. Pooled results over the 10 imputation databases are reported, unless no missing data were present.
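The imputation step can be sketched as stochastic linear-regression imputation repeated 10 times. This is a minimal illustration in the spirit of the procedure, not the exact SPSS implementation, and it omits the pre-imputation log-transformation and multivariate predictor set.

```python
import numpy as np

def impute_once(x, y, mask, rng):
    """One stochastic linear-regression imputation of y from predictor x.
    mask is True where y is observed; missing entries are filled with the
    regression prediction plus noise scaled to the residual SD."""
    A = np.column_stack([np.ones(mask.sum()), x[mask]])
    beta, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    resid = y[mask] - A @ beta
    sigma = resid.std(ddof=A.shape[1])
    A_mis = np.column_stack([np.ones((~mask).sum()), x[~mask]])
    filled = y.copy()
    filled[~mask] = A_mis @ beta + rng.normal(0.0, sigma, (~mask).sum())
    return filled

def multiple_impute(x, y, n_imputations=10, seed=0):
    """Repeat the stochastic imputation, yielding one completed dataset
    per repetition (10 by default, as in the study)."""
    rng = np.random.default_rng(seed)
    mask = ~np.isnan(y)
    return [impute_once(x, y, mask, rng) for _ in range(n_imputations)]
```

Downstream analyses are then run on each completed dataset and the resulting estimates pooled, as the text describes.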

Study Population •

First, effect modification by the onset of severe hearing impairment variable (LO group versus EO group) was investigated by adding the grouping variable and its interaction with the independent variable to the univariate linear regression analyses. Effect modification was found for many of the independent variables. Therefore, the statistical analyses were done separately for the two groups and stratified models are presented.

Descriptive Statistics •

Univariate associations between outcome measures and independent variables were tested using Pearson correlations, and median and range were calculated (Table 2). Independent samples t tests were conducted to test for significant differences between groups for speech recognition in quiet and in noise.

Prediction Models: All Parameters •

The models were built separately for the LO and EO groups and separately for speech recognition in quiet and in noise, resulting in four different models. Independent variables with a univariate p < 0.2 were considered as candidate predictors and were selected for the multivariable linear regression model. First, the linear relationship between the candidate predictors and the outcome measure was examined by dividing each continuous variable into quartiles and plotting the quartile mean against the regression coefficient. Candidate predictors with a nonlinear relationship with the outcome measures were categorized into four groups of approximately the same sample size. This categorization was done separately for the LO and EO groups. The category with the lowest value was considered the reference category. Because of the high number of candidate predictors, a forward selection procedure was then applied to select predictor variables (p entry was set at 0.05). A p < 0.05 was considered statistically significant. Regression coefficients, 95% confidence intervals, and p values are reported.
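A forward selection procedure of this kind can be sketched with partial F-tests, as below. The exact entry statistic SPSS uses may differ in detail, so this is an illustrative implementation with p entry = 0.05.

```python
import numpy as np
from scipy import stats

def rss(X, y):
    """Residual sum of squares of an OLS fit with intercept."""
    A = np.column_stack([np.ones(len(y)), X]) if X.size else np.ones((len(y), 1))
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(((y - A @ beta) ** 2).sum())

def forward_select(X, y, p_entry=0.05):
    """At each step, add the candidate predictor with the smallest partial
    F-test p-value, as long as that p-value is below p_entry."""
    n, k = X.shape
    selected, remaining = [], list(range(k))
    while remaining:
        rss0 = rss(X[:, selected], y)
        best_j, best_p = None, 1.0
        for j in remaining:
            rss1 = rss(X[:, selected + [j]], y)
            df2 = n - (len(selected) + 2)  # residual df of the larger model
            f = (rss0 - rss1) / (rss1 / df2)
            p = stats.f.sf(f, 1, df2)
            if p < best_p:
                best_j, best_p = j, p
        if best_j is None or best_p >= p_entry:
            break
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

With one strongly predictive candidate among uninformative ones, the informative predictor enters first and selection typically stops once no remaining candidate clears the entry criterion.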

Prediction Models: Parameters Related to T and C Levels, DR, Aided Thresholds and Mean (Absolute) Difference Between NRT Threshold and C Level •

Separate prediction models were built that included only those parameters that can be adjusted by the clinician (i.e., 16 parameters related to T and C levels, DR, sound field–aided thresholds, and the mean [absolute] difference between NRT thresholds and C levels). The aim of the prediction models with a selection of parameters was not to explain as much variance in speech recognition as possible, but to identify important parameters that can be adjusted by clinicians to optimize speech recognition of CI users. Again, four different models were built (separate models for speech recognition in quiet and in noise, and separate models for the LO and EO groups), using a forward selection procedure. In contrast to the procedure described in the "Prediction Models: All Parameters" section, only the parameters denoted with † in Table 2 were included in the selection procedure (i.e., 16 candidate predictors in total). Thus, the included parameters were those related to T and C levels and DR measures, mean-aided thresholds, and mean (absolute) differences between NRT thresholds and C levels. The remaining procedure was similar to the procedure described earlier in the "Prediction Models: All Parameters" section.

RESULTS

Mean speech recognition in quiet was 82.9% (SD = 12.5%) for the LO group and 79.2% (SD = 11.8%) for the EO group. The mean SRT for the adaptive digits-in-noise test was −0.8 dB SNR (SD = 3.4 dB SNR) for the LO group and 1.3 dB SNR (SD = 3.6 dB SNR) for the EO group. Speech recognition in quiet was not significantly different between groups. The mean SRT was significantly worse for the EO group compared with the LO group. Speech recognition scores in quiet and in noise are shown in Figure 1. The figure shows a large variance in SRT for CI users with similar speech recognition in quiet scores. The correlations between speech recognition in quiet and in noise were r = −0.46 and r = −0.70 for the LO and EO groups, respectively.

Fig. 1. Scatterplot and bar chart showing speech recognition scores for CI users from the LO group (n = 97, white symbols) and CI users from the EO group (n = 41, gray symbols). Speech recognition in quiet and in noise was assessed with CVC words and digit-triplets, respectively. CI indicates cochlear implant; CVC, consonant–vowel–consonant; EO, early onset; LO, late onset; SNR, signal to noise ratio; SRT, speech reception threshold.

Table 2 shows the univariate correlations with speech recognition in quiet and speech recognition in noise for the LO and EO groups. Note that the number of CI users included in the prediction models differs, because of missing data.

Speech Recognition in Quiet

All Parameters •

Table 3 shows the final multivariable prediction models of speech recognition in quiet for the LO and EO groups. A total of 33 parameters were considered, all related to T and C levels, DR, aided thresholds, NRT thresholds, and impedances. Candidate predictors with p < 0.2 were entered in the prediction models (in bold in Table 2). There were 4 significant predictors out of 33 candidate predictors for speech recognition in quiet in the LO group: mean-aided thresholds, mean absolute difference in impedances, mean DR, and the SD of impedances. Poorer speech recognition in quiet was found for CI users with mean-aided thresholds higher than 27 dB HL compared with those with mean-aided thresholds lower than 24 dB HL. The mean absolute difference in impedances, which describes the profile of impedances across the electrode array, was related to speech recognition in quiet. More specifically, speech recognition in quiet was lower for CI users with a large mean absolute difference in impedances. Furthermore, a mean DR of 40 to 50 or 50 to 60 CL yielded better speech recognition in quiet than a smaller DR of less than 40 CL. Finally, one other aspect of the impedance measurements was related to speech recognition in quiet: the SD of impedances, which reflects the variation in impedances across the electrode array. CI users with an SD of impedances between 1.12 and 1.53 kΩ had better speech recognition than CI users with an SD of impedances less than 1.12 kΩ. Note that parameters related to the profile of T levels, C levels, NRT thresholds, or differences between C levels and NRT thresholds were not significantly related to speech recognition in quiet. The total variance in speech recognition in quiet explained by the mean-aided thresholds, mean absolute difference in impedances, mean DR, and SD of impedances was 26%.

TABLE 3. Final multivariable prediction models with fitting parameters, aided thresholds, NRT thresholds, and impedances for the LO group (left column) and EO group (right column) for speech recognition in quiet

For the EO group, results were different; only one significant predictor of speech recognition in quiet was found. In this group, CI users with the highest mean T levels (i.e., above 135 CL) had worse speech recognition in quiet than CI users with the lowest mean T levels (i.e., lower than 120 CL). The total variance explained by the model was 20%.

Parameters Related to T and C Levels, DR, Aided Thresholds and Mean (Absolute) Difference Between NRT Threshold and C Level •

Table 4 shows the final multivariable prediction models of speech recognition in quiet explained with parameters related to T and C levels, DR, mean-aided thresholds, and mean (absolute) difference between NRT thresholds and C levels. The results of these prediction models were similar to the results of the prediction models described earlier, with the exception of the impedance measures that were not included in this model.

TABLE 4. Multivariable prediction model for speech recognition in quiet with fitting parameters, aided thresholds, and mean (absolute) differences between NRT thresholds and C levels for the LO (left column) and EO (right column) groups

Two out of 16 candidate predictors were significant predictors of speech recognition in quiet in the LO group: mean-aided thresholds and mean DR. CI users had worse speech recognition in quiet if they had mean-aided thresholds higher than 27 dB HL compared with CI users with mean-aided thresholds less than 24 dB HL. CI users with a mean DR of 50 to 60 CL had better speech recognition in quiet than CI users with a smaller DR of less than 40 CL. The variance explained by the total model was 13%.

The prediction model with the subset of parameters in the EO group gave the same result as with “All parameters” (see earlier).

Speech Recognition in Noise

All Parameters •

Table 5 shows the final multivariable prediction models of speech recognition in noise for the LO and EO groups. All parameters related to T and C levels, DR, aided thresholds, NRT thresholds, and impedances were considered (i.e., 33 candidate predictors). Candidate predictors with p < 0.2 were entered in the selection procedure (in bold in Table 2). The prediction model for speech recognition in noise showed much overlap with the prediction model for speech recognition in quiet. Significant predictors of speech recognition in noise in the LO group were mean-aided thresholds, mean absolute difference in impedances, and mean DR. Poorer SRTs were found for CI users with mean-aided thresholds higher than 27 dB HL compared with CI users with mean-aided thresholds lower than 24 dB HL. Furthermore, the mean SRT was worse when the mean absolute difference in impedances was large (i.e., >0.725 kΩ). Finally, CI users with a mean DR of 40 to 50 CL had better SRTs than CI users with a mean DR of less than 40 CL. As for speech recognition in quiet, parameters related to the profile of T levels, C levels, NRT thresholds, or differences between C levels and NRT thresholds were not significantly related to speech recognition in noise. The total variance in speech recognition in noise explained by the model was 14%.

TABLE 5.
Final multivariable prediction models with fitting parameters, aided thresholds, NRT thresholds, and impedances for the LO group (left column) and EO group (right column) for speech recognition in noise

The prediction model for speech recognition in noise in the EO group was similar to the prediction model for speech recognition in quiet in the EO group. There was only one significant predictor of speech recognition in noise: mean T level. As for speech recognition in quiet, CI users with higher mean T levels had worse SRTs. The variance in speech recognition explained by the mean T level was 14%.

Parameters Related to T and C Levels, DR, Aided Thresholds, and Mean (Absolute) Difference Between NRT Threshold and C Level •

The multivariable models that were built with parameters related to T and C levels, DR, mean-aided thresholds, and mean (absolute) difference between NRT thresholds and C levels as predictors for speech recognition in noise in the LO and EO groups are presented in Table 6.

TABLE 6.
Multivariable prediction model for speech recognition in noise with fitting parameters, aided thresholds, and mean (absolute) differences between NRT thresholds and C levels for the LO (left column) and EO (right column) groups

For the LO group, only 1 of the 16 candidate predictors predicted speech recognition in noise: mean-aided thresholds. CI users with higher mean-aided thresholds (between 27 and 30 dB HL) had worse SRTs than CI users with mean-aided thresholds lower than 24 dB HL. The multivariable model explained only 5% of the variance in speech recognition in noise in the LO group.

Three parameters were significant predictors of speech recognition in noise in the EO group: mean T level, range in DR, and mean-aided thresholds. Higher mean T levels were associated with worse (higher) SRTs. Furthermore, CI users with a range in DR (i.e., the highest DR at any electrode minus the lowest DR at any electrode) between 12 and 21 CL had higher SRTs than CI users with a range in DR of less than 12 CL (i.e., a relatively constant DR across electrodes). Finally, CI users with higher mean-aided thresholds had higher SRTs. The multivariable model explained 34% of the variance in speech recognition in noise.
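The per-electrode DR and its two summary parameters used above (mean DR and range in DR) are simple to derive from a map of T and C levels. A minimal sketch; the electrode count and level values are illustrative, not study data:

```python
import numpy as np

def dr_parameters(t_levels, c_levels):
    """Per-electrode electrical dynamic range (DR = C level - T level, in CL),
    summarized as the mean DR and the range in DR (max DR - min DR)."""
    dr = np.asarray(c_levels, dtype=float) - np.asarray(t_levels, dtype=float)
    return dr.mean(), dr.max() - dr.min()

# Illustrative 22-electrode map: a DR of 45 CL on one half of the array
# and 55 CL on the other half.
t = np.full(22, 120.0)
c = np.concatenate([np.full(11, 165.0), np.full(11, 175.0)])
mean_dr, range_dr = dr_parameters(t, c)  # mean_dr = 50.0, range_dr = 10.0
```

A map with a constant DR across electrodes would have a range in DR of 0 CL, regardless of how large the mean DR is.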

DISCUSSION

The objective of this study was to identify parameters that are related to speech recognition in quiet and in noise in CI users. These parameters may be important for improving current fitting practices. Prediction models were built with a subset of parameters that are available to an audiologist during a fitting session. The models were built separately for speech recognition in quiet and in noise and for different groups of CI users, namely postlingually deafened CI users with LO and EO of severe hearing impairment. A total of 33 parameters were investigated.

For the LO group, elevated mean-aided thresholds were found to have a negative relation with speech recognition in quiet and in noise. As an example, mean speech recognition in quiet was 90.5% for CI users with aided thresholds better than 24 dB HL compared with 77.9% for CI users with aided thresholds of 27 to 30 dB HL. The mean SRT was −2.0 dB SNR for CI users with aided thresholds less than 24 dB HL versus −0.3 dB SNR for CI users with aided thresholds between 27 and 30 dB HL. CI users with a larger mean DR (i.e., between 40 and 60 CL) had better speech recognition both in quiet and in noise than CI users with a mean DR of less than 40 CL. Furthermore, the mean absolute difference in impedances between adjacent electrodes and the SD of impedances across the electrode array were found to be associated with speech recognition in quiet and in noise. For the EO group, higher mean T levels were associated with worse speech recognition in quiet and in noise. Parameters related to C levels and NRT thresholds were not related to speech recognition in this study.

T Levels

If T levels are set correctly, they represent the minimum electrical current that just yields an auditory percept. Many studies have reported on the effects of adjustments to T levels on speech recognition in quiet and in noise (Skinner et al. 1999; Pfingst et al. 2004; Spahr & Dorman 2005; Dawson et al. 2007; Van der Beek et al. 2015; Busby & Arora 2016). Skinner et al. (1999) found improvements in speech recognition when T levels were raised. Lowering or elevating T levels reduces speech recognition in quiet (Dawson et al. 2007; Busby & Arora 2016). Several studies (Pfingst & Xu 2004, 2005; Zhou & Pfingst 2014) also reported that high variability of T levels across electrodes can negatively impact speech recognition, but we did not find such an effect. It must be noted, however, that considerable differences in mapping exist between Cochlear and other CI devices (e.g., default T levels in Advanced Bionics devices are set at 10% of M levels). Thus, conclusions drawn from studies with one brand of CI may not apply to other brands.

Several mechanisms may play a role in the reported relation between T levels and speech recognition performance. For example, higher stimulation levels (i.e., higher T levels) have been reported for electrodes at greater radial distances from the modiolus (Saunders et al. 2002; Long et al. 2014; DeVries et al. 2016). The electrode's radial distance from the modiolus differs between types of electrode array: straight electrode arrays lie closer to the lateral wall than perimodiolar arrays. While this may affect both T and C levels, it is unlikely to influence DR or aided thresholds. Spiral ganglion cell degeneration along the cochlea may also result in higher T levels (Long et al. 2014). Such degeneration is expected in CI users with longer durations of deafness, thus in the EO group. This could explain our findings in the EO group, and might also explain the absence of a relation between T levels and speech recognition in the LO group. We therefore argue that the negative relationship between mean T levels and speech recognition found in our model may reflect a poor electrode-neuron interface rather than a nonoptimal fitting of the speech processor. It is also possible that the CI users in the EO group are less able to provide reliable feedback when thresholds are assessed with soft stimuli, which will most likely result in T levels that are set too high. If so, aided thresholds may be a more reliable means of verifying T levels than the psychophysically determined T levels for some CI users. If T levels and aided thresholds are verified and appear to be set correctly, lowering T levels has limited value.

Electrical DR

Fitting of the sound processor often aims at maximizing the use of the auditory nerve’s DR, with T levels set at threshold and increasing the C levels. The present study supports previous findings showing that the magnitude of the DR has an effect on speech recognition (Blamey et al. 1992; Loizou et al. 2000; Pfingst & Xu 2005; Van der Beek et al. 2015). The DR was found to have a relation with speech recognition performance for users in the LO group only. The dissimilarity in findings between the EO and LO group was unexpected and may be caused by the smaller sample size of the EO group compared with the LO group, different hearing loss etiologies or the larger spread of age at onset of severe hearing impairment or deaf education in the EO group.

In our CI center, C levels are increased during the first weeks after the initial fitting to allow CI users to acclimatize to the increasing loudness while the DR increases. Based on the current findings, expanding the DR to a range between 40 and 60 CL seems a good approach. However, adjustments of T and/or C levels to obtain aided thresholds around the target level and the preferred DR may not always be acceptable for individual CI users. Careful counseling and an adaptation period are needed to find out whether CI users can accept these new settings, and speech recognition should be assessed to determine whether scores actually improve.

C Levels

C levels were not found to be a predictor of speech recognition in any of our prediction models. The correlation between T and C levels was moderate to strong (r = 0.78 and r = 0.75 for the LO and EO groups, respectively). Because candidate predictors with a strong correlation generally do not end up in the same prediction model, we built a new prediction model for speech recognition in noise for the EO group only, excluding T levels from the forward selection procedure. In this model, C levels indeed became a significant predictor, explaining the same amount of variance as T levels did in the original prediction model: worse speech recognition in noise was found for CI users with higher mean C levels. This might be related to poorer neural survival or greater distances between the electrode and the modiolus, as explained in the paragraph on T levels. In addition, current spread due to higher stimulation levels is known to increase the risk of channel interactions, which can also degrade speech recognition (Jones et al. 2013).

Mean-Aided Thresholds

Several studies have shown a significant relationship between aided sound field thresholds and speech recognition (Skinner et al. 1999; James et al. 2003; Skinner 2003; Firszt et al. 2004; Davidson et al. 2009; Holden et al. 2011, 2013, 2019; Busby & Arora 2016). The current results support this finding. Multiple studies have investigated the effect of setting T levels above or below hearing thresholds in people using the Cochlear Nucleus system (Skinner et al. 1999; Zeng & Galvin 1999; Franck et al. 2003; Dawson et al. 2007; Zhou & Pfingst 2014; Busby & Arora 2016). Elevated aided thresholds are the result of T levels that are set too low (Vaerenberg et al. 2014a; Busby & Arora 2016), because with such T levels, stimuli will be presented below the actual hearing thresholds. T levels that are set properly will result in aided thresholds at 25 dB HL (i.e., at T-SPL) when the sensitivity is set at 12. Sometimes, T levels are intentionally set below the psychophysically determined threshold because CI users complain that soft ambient sounds are perceived as too loud. CI users can also opt to lower the microphone sensitivity, which will likewise result in higher aided thresholds. Both may result in poorer speech recognition. Furthermore, aided thresholds might be elevated in CI users with a so-called T-tail, a region of very slow loudness growth near threshold levels (Donaldson & Allen 2003). In case of a T-tail, there is limited change in loudness percept in the lower part of the DR across a wide range of stimulation levels. To eliminate such slow loudness growth regions, T levels should be raised to the point at which loudness begins to grow with increasing stimulus level (Wolfe & Schafer 2015).

Current and previous findings underline the importance of measuring aided thresholds and setting them at the correct level for optimal speech recognition. This is assumed to be important for CI users in the EO group as well. Daily communication often includes listening to soft speech, so optimal audibility at soft levels is required. In case of elevated aided thresholds, clinicians should explain to these CI users the importance of lowering the aided thresholds for optimal speech recognition and encourage them to attempt to acclimatize to louder ambient sounds after raising T levels. Changing the sensitivity to a lower (less sensitive) setting should be discouraged for the same reason. Our results suggest that T levels should be determined precisely to prevent stimulation below hearing thresholds. The current procedure for setting T levels requires feedback from the CI user, which can be difficult, especially when stimulation is near threshold levels. Several methods to determine T levels and improve aided thresholds have been proposed (refer to Skinner et al. 1999 and Rader et al. 2018, among others).

It should be noted that if T levels are set too high, it will not have an effect on the aided threshold, but it may be reflected in the feedback of CI users who complain about soft environmental sounds that are perceived too loud.

NRT Thresholds

The use of NRT thresholds as an alternative to behavioral fitting has been widely studied (see He et al. 2017 and de Vos et al. 2018, for an overview). In our CI center, NRT thresholds are often used as a guide to set C levels in children. Based on clinical observations, we hypothesized that smaller differences between C levels and NRT thresholds would be associated with better speech recognition. We did not find such a relationship. Other studies demonstrated a weak to moderate (r = 0.58) correlation between NRT thresholds and C levels (de Vos et al. 2018). We explored the data and found correlations of r = 0.38 to 0.67 between NRT thresholds and C levels for the most frequently assessed electrodes (i.e., electrodes 1, 2, 6, 11, 16, 22). The correlation was strongest for electrode 16 and weakest for electrode 1 (Figure 2). For electrode 16, the mean difference between NRT thresholds and C levels was 3.0 CL. Around 50% of the C levels were within 10 CL of the NRT thresholds. It is important to note that the C levels were corrected for volume, pulse width, and stimulation rate if these deviated from the reference settings (i.e., volume = 10, pulse width = 25 µsec, and stimulation rate = 900 Hz). This correction is important to retain the original relation between C levels and NRT thresholds: if C levels are shifted because of changes in the volume setting (e.g., from the default setting of 10 to a lower value of 6), the relation between C levels and NRT thresholds will be shifted as well.

Fig. 2.
Scatterplot of NRT thresholds vs. C levels for electrodes 1, 16, and 22. The dotted line represents equal NRT thresholds and C levels. NRT indicates neural response telemetry.

The results of the present study suggest that fitting based on NRT thresholds cannot be considered a complete alternative to behavioral fitting in adult CI users. However, NRT thresholds might give a good first indication of stimulation levels when behavioral fitting proves difficult, for instance in children. Beyond fitting, numerous other applications of NRT thresholds are currently being studied and might be of value for clinical practice, for instance estimating the neural survival of auditory nerve fibers (see He et al. 2017, for an overview).

Impedances

Impedances are frequently assessed as a measure of electrode functioning. With the different impedance measures, we aimed to describe the profile of impedances across the electrode array. The general assumption is that impedance profiles should not show an erratic pattern, but should be relatively flat with only mild variations (Wolfe & Schafer 2015). Impedances are related to the resistive characteristics of the electrode surroundings. The electrode position influences the tissue and fluid surrounding the electrode (Swanson et al. 1995); these change, for instance, with proximity to the modiolus or with partial insertion in the scala vestibuli instead of the scala tympani. The present study showed that variations in impedances across the electrode array, reflected in higher mean absolute differences between adjacent electrodes, were associated with worse speech recognition in quiet and in noise. Possibly, translocation of the electrode array from the scala tympani to the scala vestibuli leads to variations in impedances across the electrode array and could subsequently lead to poorer speech recognition due to damage to cochlear structures. Studies have shown that impedances can differ between basal and apical electrodes and between types of electrodes (Busby et al. 2002; Saunders et al. 2002). Exploration of our data revealed that CI users with a mean absolute difference in impedances above 0.725 kΩ were relatively more often implanted with implants other than the CI24RECA (i.e., the CI512): of the CI users in the LO group implanted with the CI512, 73% had a mean absolute difference above 0.725 kΩ. However, because both implants are perimodiolar with half-band electrodes of equal surface areas, this does not explain the finding. The CI422SRA, in contrast, is a lateral wall implant, which may have a substantial effect on impedances.
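The two impedance-profile descriptors examined in this study (the mean absolute difference between adjacent electrodes and the SD across the array) can be computed directly from a vector of per-electrode impedances. A minimal sketch; the example values are illustrative, not study data:

```python
import numpy as np

def impedance_profile_measures(impedances_kohm):
    """Two profile descriptors for per-electrode impedances: the mean absolute
    difference between adjacent electrodes (how erratic the profile is) and
    the sample SD across the array (overall spread)."""
    z = np.asarray(impedances_kohm, dtype=float)
    mean_abs_diff = np.mean(np.abs(np.diff(z)))  # adjacent-electrode jumps
    sd = np.std(z, ddof=1)                       # spread across the array
    return mean_abs_diff, sd

# A smooth 22-electrode profile vs. the same values in shuffled order:
z_flat = np.linspace(8.0, 10.0, 22)
rng = np.random.default_rng(0)
z_erratic = rng.permutation(z_flat)

mad_flat, sd_flat = impedance_profile_measures(z_flat)
mad_erratic, sd_erratic = impedance_profile_measures(z_erratic)
```

The comparison illustrates why both measures are informative as separate candidate predictors: the two arrays contain identical values and thus have the same SD, yet the shuffled profile has much larger adjacent-electrode jumps, i.e., it is far more erratic.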

The results of this study suggest that the profile of impedances across the electrode array is more important for speech recognition performance than the mean impedance value. Clinicians should therefore measure impedances and evaluate the impedance profile, in addition to checking for short or open circuits and changes in impedances over time. If erratic impedance profiles are found, reprogramming might be necessary to improve speech recognition performance or to avoid out-of-compliance issues. Erratic impedance profiles might also be suggestive of insulation damage (Cullington 2013). Clinicians might also opt for an integrity test and should counsel CI users about their expectations in terms of speech recognition (i.e., more erratic profiles might lead to poorer speech recognition).

Strengths and Limitations

A strength of the present study is the inclusion of experienced postlingually deafened adult CI users who are homogeneous with respect to CI center, CI brand, speech processing strategy, stimulation rate, pulse width, number of maxima, and who are rehabilitated in a team with a small number of surgeons, audiologists, and speech pathologists. Thus, treatment and CI fitting in this study can be considered fairly similar across the participants. The results of this study might not be directly applicable to CI users of other CI centers, because of the differences in fitting practices between CI centers, or generalized to other groups of CI users (e.g., CI users of other brands or prelingually deaf adults).

The CI users in our study were implanted with different implant models (i.e., perimodiolar or lateral wall arrays), which could have influenced T and C levels, DRs, and electrode impedances. We conducted additional univariate analyses to investigate the relation between type of implant and speech recognition, but these analyses did not show a significant relation.

The explained variance of the prediction models described in this study was limited. For the LO group, it ranged from 13% to 26% for speech recognition in quiet and from 5% to 14% for speech recognition in noise. For the EO group, the explained variance was 20% for speech recognition in quiet and ranged from 14% to 34% for speech recognition in noise.

The different NRT threshold variables had a considerable number of missing values. We applied multiple imputation, but the final multivariable prediction models did not include any of the NRT variables as predictors. Thus, the prediction models would have been the same if the analyses had been performed on the original data without imputation. Data were also missing for several other parameters. All cases were included in the analyses; however, multivariable linear regression analysis only includes complete cases. This resulted in different numbers of CI users being included in the different prediction models, which limited the statistical power.

Clinical Implications

The results of this study may guide audiologists in their fitting practices to help improve the speech recognition performance of CI users. Prediction models were used to identify determinants of speech recognition in quiet and in noise. It must be noted however that not all of these parameters can be manipulated or changed in the fitting of every individual CI user. Currently, we are conducting a study to find out if CI users who could theoretically benefit from refitting can adapt to new settings after an extensive counseling session and show improved speech recognition scores.

Clinicians should measure aided thresholds and emphasize to CI users the importance of having thresholds at approximately 25 dB HL for optimal speech recognition. If aided thresholds are above the target level, clinicians should try to raise T levels, counsel CI users to get accustomed to ambient sounds, and discourage them from lowering the sensitivity. The DR should preferably be between 40 and 60 CL, achieved by setting T levels at threshold and increasing C levels where possible. Finally, clinicians should pay attention to impedance profiles. For atypical impedance profiles, reprogramming might be necessary to improve speech recognition performance or to avoid out-of-compliance issues, and an integrity test could be scheduled to check the status of the electrode array. CI users should also be counseled to manage their expectations about speech recognition in such cases.

CONCLUSIONS

In conclusion, we were able to identify parameters related to speech recognition in quiet and in noise in two groups of CI users (i.e., EO and LO of severe hearing impairment). The predictors found in this study are consistent with those found in previous research, and were very similar for speech recognition in quiet and in noise, which suggests that optimizing speech recognition in quiet will also optimize speech recognition in noise, or will at least not be at the expense of speech recognition in noise. Important parameters in the group of CI users with LO of severe hearing impairment were the mean-aided thresholds, DR, and measures to express the impedance profile across the electrode array. Elevated aided thresholds were related to worse speech recognition in quiet and in noise. CI users with a larger DR were found to have better speech recognition, both in quiet and in noise. In the group of CI users with EO of severe hearing impairment, worse speech recognition in quiet and in noise was found for CI users with higher T levels. Future research should assess the clinical relevance of the predictors identified in this study.

ACKNOWLEDGMENTS

F.d.G. and C.S. designed the study and organized and carried out the data collection. F.d.G. and B.I.L.-W. analyzed the data. All authors participated in the interpretation of the data. F.d.G. had the leading role in the writing process. All authors revised the manuscript critically for important intellectual content and approved the current version to be submitted to Ear and Hearing.

REFERENCES

Blamey P. J., Pyman B. C., Gordon M., et al. Factors predicting postoperative sentence scores in postlinguistically deaf adult cochlear implant patients. Ann Otol Rhinol Laryngol, (1992). 101, 342–348.
Blamey P., Arndt P., Bergeron F., et al. Factors affecting auditory performance of postlinguistically deaf adults using cochlear implants. Audiol Neurootol, (1996). 1, 293–306.
Blamey P., Artieres F., Başkent D., et al. Factors affecting auditory performance of postlinguistically deaf adults using cochlear implants: An update with 2251 patients. Audiol Neurootol, (2013). 18, 36–47.
Bosman A. J., Smoorenburg G. F. Intelligibility of Dutch CVC syllables and sentences for listeners with normal hearing and with three types of hearing impairment. Audiology, (1995). 34, 260–284.
Botros A., Psarros C. Neural response telemetry reconsidered: II. The influence of neural population on the ECAP recovery function and refractoriness. Ear Hear, (2010a). 31, 380–391.
Botros A., Psarros C. Neural response telemetry reconsidered: I. The relevance of ECAP threshold profiles and scaled profiles to cochlear implant fitting. Ear Hear, (2010b). 31, 367–379.
Brown C. J., Hughes M. L., Luk B., et al. The relationship between EAP and EABR thresholds and levels used to program the nucleus 24 speech processor: data from adults. Ear Hear, (2000). 21, 151–163.
Busby P. A., Arora K. Effects of threshold adjustment on speech perception in Nucleus cochlear implant recipients. Ear Hear, (2016). 37, 303–311.
Busby P. A., Plant K. L., Whitford L. A. Electrode impedance in adults and children using the Nucleus 24 cochlear implant system. Cochlear Implants Int, (2002). 3, 87–103.
Cochlear Ltd. Clinical Guidance Document. (2012). Sydney, Australia: Cochlear, Ltd.
Cullington H. E. Managing cochlear implant patients with suspected insulation damage. Ear Hear, (2013). 34, 515–521.
Davidson L. S., Skinner M. W., Holstad B. A., et al. The effect of instantaneous input dynamic range setting on the speech perception of children with the nucleus 24 implant. Ear Hear, (2009). 30, 340–349.
Dawson P. W., Vandali A. E., Knight M. R., et al. Clinical evaluation of expanded input dynamic range in Nucleus cochlear implants. Ear Hear, (2007). 28, 163–176.
de Vos J. J., Biesheuvel J. D., Briaire J. J., et al. Use of electrically evoked compound action potentials for cochlear implant fitting: A systematic review. Ear Hear, (2018). 39, 401–411.
DeVries L., Scheperle R., Bierer J. A. Assessing the electrode-neuron interface with the electrically evoked compound action potential, electrode position, and behavioral thresholds. J Assoc Res Otolaryngol, (2016). 17, 237–252.
Donaldson G. S., Allen S. L. Effects of presentation level on phoneme and sentence recognition in quiet by cochlear implant listeners. Ear Hear, (2003). 24, 392–405.
Esquia Medina G. N., Borel S., Nguyen Y., et al. Is electrode-modiolus distance a prognostic factor for hearing performances after cochlear implant surgery? Audiol Neurootol, (2013). 18, 406–413.
Finley C. C., Holden T. A., Holden L. K., et al. Role of electrode placement as a contributor to variability in cochlear implant outcomes. Otol Neurotol, (2008). 29, 920–928.
Firszt J. B., Holden L. K., Skinner M. W., et al. Recognition of speech presented at soft to loud levels by adult cochlear implant recipients of three cochlear implant systems. Ear Hear, (2004). 25, 375–387.
Franck K. H., Norton S. J. Estimation of psychophysical levels using the electrically evoked compound action potential measured with the neural response telemetry capabilities of Cochlear Corporation’s CI24M device. Ear Hear, (2001). 22, 289–299.
Franck K. H., Xu L., Pfingst B. E.. Effects of stimulus level on speech perception with cochlear prostheses. J Assoc Res Otolaryngol, (2003). 4, 49–59.
He S., Teagle H. F. B., Buchman C. A. The electrically evoked compound action potential: From laboratory to clinic. Front Neurosci, (2017). 11, 339.
Holden L. K., Reeder R. M., Firszt J. B., et al. Optimizing the perception of soft speech and speech in noise with the Advanced Bionics cochlear implant system. Int J Audiol, (2011). 50, 255–269.
Holden L. K., Finley C. C., Firszt J. B., et al. Factors affecting open-set word recognition in adults with cochlear implants. Ear Hear, (2013). 34, 342–360.
Holden L. K., Firszt J. B., Reeder R. M., et al. Evaluation of a new algorithm to optimize audibility in cochlear implant recipients. Ear Hear, (2019). 40, 990–1000.
Hughes M. L., Brown C. J., Abbas P. J., et al. Comparison of EAP thresholds with MAP levels in the nucleus 24 cochlear implant: Data from children. Ear Hear, (2000). 21, 164–174.
Hughes M. L., Vander Werff K. R., Brown C. J., et al. A longitudinal study of electrode impedance, the electrically evoked compound action potential, and behavioral measures in nucleus 24 cochlear implant users. Ear Hear, (2001). 22, 471–486.
James C. J., Skinner M. W., Martin L. F., et al. An investigation of input level range for the nucleus 24 cochlear implant system: Speech perception performance, program preference, and loudness comfort ratings. Ear Hear, (2003). 24, 157–174.
James C. J., Karoui C., Laborde M. L., et al. Early sentence recognition in adult cochlear implant users. Ear Hear, (2019). 40, 905–917.
Jones G. L., Won J. H., Drennan W. R., et al. Relationship between channel interaction and spectral-ripple discrimination in cochlear implant users. J Acoust Soc Am, (2013). 133, 425–433.
Kaandorp M. W., Smits C., Merkus P., et al. Assessing speech recognition abilities with digits in noise in cochlear implant and hearing aid users. Int J Audiol, (2015). 54, 48–57.
Kaandorp M. W., Smits C., Merkus P., et al. Lexical-access ability and cognitive predictors of speech recognition in noise in adult cochlear implant users. Trends Hear, (2017). 21, 2331216517743887.
Lazard D. S., Vincent C., Venail F., et al. Pre-, per- and postoperative factors affecting performance of postlinguistically deaf adults using cochlear implants: a new conceptual model over time. PLoS One, (2012). 7, e48739.
Loizou P. C., Dorman M., Fitzke J. The effect of reduced dynamic range on speech understanding: Implications for patients with cochlear implants. Ear Hear, (2000). 21, 25–31.
Long C. J., Holden T. A., McClelland G. H., et al. Examining the electro-neural interface of cochlear implant users using psychophysics, CT scans, and speech understanding. J Assoc Res Otolaryngol, (2014). 15, 293–304.
Netten A. P., Dekker F. W., Rieffe C., et al. Missing data in the field of otorhinolaryngology and head & neck surgery: Need for improvement. Ear Hear, (2017). 38, 1–6.
Pfingst B. E., Xu L. Across-site variation in detection thresholds and maximum comfortable loudness levels for cochlear implants. J Assoc Res Otolaryngol, (2004). 5, 11–24.
Pfingst B. E., Xu L. Psychophysical metrics and speech recognition in cochlear implant users. Audiol Neurootol, (2005). 10, 331–341.
Pfingst B. E., Xu L., Thompson C. S. Across-site threshold variation in cochlear implants: Relation to speech recognition. Audiol Neurootol, (2004). 9, 341–352.
Rader T., Doms P., Adel Y., et al. A method for determining precise electrical hearing thresholds in cochlear implant users. Int J Audiol, (2018). 57, 502–509.
Saunders E., Cohen L., Aschendorff A., et al. Threshold, comfortable level and impedance changes as a function of electrode-modiolar distance. Ear Hear, (2002). 23(1 Suppl), 28S–40S.
Sherbecoe R. L., Studebaker G. A.. Supplementary formulas and tables for calculating and interconverting speech recognition scores in transformed arcsine units. Int J Audiol, (2004). 43, 442–448.
Skinner M. W. Optimizing cochlear implant speech performance. Ann Otol Rhinol Laryngol Suppl, (2003). 191, 4–13.
Skinner M. W., Holden L. K., Holden T. A., et al. Comparison of two methods for selecting minimum stimulation levels used in programming the Nucleus 22 cochlear implant. J Speech Lang Hear Res, (1999). 42, 814–828.
Skinner M. W., Ketten D. R., Holden L. K., et al. CT-derived estimation of cochlear morphology and electrode array position in relation to word recognition in Nucleus-22 recipients. J Assoc Res Otolaryngol, (2002). 3, 332–350.
Smits C., Theo Goverts S., Festen J. M. The digits-in-noise test: Assessing auditory speech recognition abilities in noise. J Acoust Soc Am, (2013). 133, 1693–1706.
Smoorenburg G. F., Willeboer C., van Dijk J. E. Speech perception in nucleus CI24M cochlear implant users with processor settings based on electrically evoked compound action potential thresholds. Audiol Neurootol, (2002). 7, 335–347.
Spahr A. J., Dorman M. F. Effects of minimum stimulation settings for the Med El Tempo+ speech processor on speech understanding. Ear Hear, (2005). 26(4 Suppl), 2S–6S.
Sterne J. A., White I. R., Carlin J. B., et al. Multiple imputation for missing data in epidemiological and clinical research: Potential and pitfalls. BMJ, (2009). 338, b2393.
Swanson B., Seligman P., Carter P. Impedance measurement of the Nucleus 22-electrode array in patients. Ann Otol Rhinol Laryngol Suppl, (1995). 166, 141–144.
Teoh S. W., Pisoni D. B., Miyamoto R. T. Cochlear implantation in adults with prelingual deafness. Part I. Clinical results. Laryngoscope, (2004). 114, 1536–1540.
Vaerenberg B., De Ceulaer G., Szlávik Z., et al. Setting and reaching targets with computer-assisted cochlear implant fitting. ScientificWorldJournal, (2014a). 2014, 646590.
Vaerenberg B., Smits C., De Ceulaer G., et al. Cochlear implant programming: A global survey on the state of the art. ScientificWorldJournal, (2014b). 2014, 1–12.
van der Beek F. B., Briaire J. J., Frijns J. H. Population-based prediction of fitting levels for individual cochlear implant recipients. Audiol Neurootol, (2015). 20, 1–16.
Vargas J. L., Sainz M., Roldan C., et al. Analysis of electrical thresholds and maximum comfortable levels in cochlear implant patients. Auris Nasus Larynx, (2013). 40, 260–265.
Wolfe J., Schafer E. Programming Cochlear Implants (2015). 2nd ed. San Diego, CA: Plural Publishing, Inc.
Yukawa K., Cohen L., Blamey P., et al. Effects of insertion depth of cochlear implant electrodes upon speech perception. Audiol Neurootol, (2004). 9, 163–172.
Zeng F. G., Galvin J. J. 3rd. Amplitude mapping and phoneme recognition in cochlear implant listeners. Ear Hear, (1999). 20, 60–74.
Zhou N., Pfingst B. E. Effects of site-specific level adjustments on speech recognition with cochlear implants. Ear Hear, (2014). 35, 1.

REFERENCE NOTE

1. Cochlear. Cochlear Technology Center, Mechelen. Personal communication.
Keywords: Cochlear implant; ECAP; Fitting parameters; Impedance; Multivariable linear regression; NRT thresholds; Speech recognition

Copyright © 2019 The Authors. Ear & Hearing is published on behalf of the American Auditory Society, by Wolters Kluwer Health, Inc.