
Research Article

Development and Evaluation of a Language-Independent Test of Auditory Discrimination for Referrals for Cochlear Implant Candidacy Assessment

Ching, Teresa Y.C.1,2; Dillon, Harvey1,2,3; Hou, Sanna1; Seeto, Mark1; Sodan, Ana1,4,5; Chong-White, Nicky1

doi: 10.1097/AUD.0000000000001166


INTRODUCTION

Cochlear implants (CIs) are medical devices that bypass the sensory function of the damaged inner ear to electrically stimulate auditory neurons, improving sound detection and speech perception in people with severe to profound hearing loss who receive limited benefit from hearing aids (HAs) (Gates & Mills 2005; Yawn et al. 2015). Despite the substantial speech perception benefits provided by cochlear implantation (Gaylor et al. 2013; Jolink et al. 2016), CI utilization has been estimated at less than 10% of adults with severe or profound hearing loss (Access Economics 2006; Sorkin & Buchman 2016). Recent research suggests that healthcare professionals’ limited awareness of, and uncertainty about, CI candidacy criteria (Buchman et al. 2020), together with variability in eligibility criteria, are major barriers to referrals for CI candidacy assessment (Looi et al. 2017; Bierbaum et al. 2020).

Current criteria for CI referrals for adults are generally based on audiometric thresholds and speech recognition performance with HAs, among other considerations. With advances in surgical techniques and CI technology, the criteria for referrals have expanded and evolved, with considerable variation across clinics (Carlson et al. 2018) and countries (Vickers, De Raeve, et al. 2016). In a survey of 28 respondents from 17 countries in Africa (South Africa), Australia/Oceania, Europe, North America, and South America (Vickers, De Raeve, et al. 2016), 70% indicated that audiometric criteria were applied, with thresholds varying from >70 dB HL at frequencies above 1.5 kHz to >90 dB HL at 2 and 4 kHz bilaterally. Further, 85% of respondents reported the use of speech-based criteria, with 40% using word tests, 24% sentence tests, and 36% a combination of both. Fifty-nine percent of the speech tests were conducted in quiet, and the remainder in both quiet and noise. The recommended speech test battery varies across CI manufacturers, including sentences from the AzBio Test (Spahr & Dorman 2004; Spahr et al. 2012), the Hearing in Noise Test (HINT; Nilsson et al. 1994), the Bamford-Kowal-Bench Speech-in-Noise Test, or the Bamford-Kowal-Bench (BKB) test (Bench et al. 1979); and words from the Consonant-Nucleus-Consonant Monosyllabic Test (CNC word test; Peterson & Lehiste 1962). Use of one or more of these tests forms the basis of the minimum speech test battery for evaluation of CI candidacy in the United States (http://www.auditorypotential.com/MSTBfiles/MSTBManual2011-06-20%20.pdf). Currently, the speech performance criteria for referrals vary across manufacturers from ≤40% to 50% or 60% correct for sentence perception (Gifford, 2013, Chapter 1, p.6). The criteria also vary across regulatory authorities (Wolfe 2020, Chapter 5, p.117–149).

The use of language-based tests has good face validity, as they measure the functional use of hearing for speech understanding by a hearing-impaired listener. However, performance in language-based tests is influenced by many factors, including top-down linguistic and neurocognitive processes that do not necessarily relate to the listener’s peripheral hearing abilities (e.g., Kilman et al. 2014; Moberly et al. 2018). Kilman et al. (2014) showed that speech reception thresholds (SRTs) for understanding speech in noise improved (i.e., better signal-to-noise ratio, or SNR) as the listener’s proficiency in the language of the test increased. Moberly et al. (2018) reported that sentence recognition in babble noise in 31 post-lingually deafened adult CI candidates was significantly predicted by working memory capacity after accounting for audiometric hearing loss.

For a listener to correctly repeat sentences partially masked by noise, they must be capable of understanding the sentences presented. This in turn requires them to be highly familiar with the language and phrases from which the presented sentences are derived. The test material, therefore, needs to be presented in the listener’s primary language of communication. When a test developed in one language is adapted into other languages, the inherent difficulty of the test changes, because phrases and words in some languages are easier to comprehend in noise than in others; the test is thus not equivalent across listeners who speak different languages. For instance, the Hearing in Noise Test (HINT) is available in at least 14 languages (Nilsson et al. 1994; Soli & Wong 2008), with normative SRTs varying across languages (e.g., Wong et al. 2007). Similarly, the Digit Triplet Test (Smits et al. 2004), originally developed for self-assessment via telephone, has now been adapted into about 15 languages (Denys et al. 2018). Furthermore, performance on a language-based test depends on the subject’s ability to use syntactic and semantic knowledge to infer the identity of words or parts of words masked by background noise, and on working memory capabilities (O’Neill et al. 2019). These abilities vary among people and hence affect the test score, yet they are not relevant to the relative effectiveness of CIs and HAs, as the person tested has the same level of these abilities no matter which type of device they are wearing. In addition, a health professional/audiologist is required to conduct the assessment. Even though software for administering speech tests is available, the professional would still need to assess and score the subject’s ability to repeat the sentences heard.

Previous attempts to address these limitations have included the development of automated hearing test systems capable of performing several tests, including a test of speech discrimination. In such a test, a listener is presented with words at an audible level and is required to select the corresponding response from alternatives presented either as written words or as pictures. For example, the word “horse” may be presented acoustically together with several pictures, one showing a horse, and the listener is asked to select the image that represents the word heard. The test eliminates the need for a hearing professional to administer or attend the test. However, it does not address any of the language-dependent issues identified above. Specifically, a different test is still required for each language spoken by the person taking the test, the relative difficulty of the test may still be affected by the choice of language, and the test score may be affected by the listener’s ability to use phonological, semantic, and syntactic knowledge in the language.

Others have used an online hearing test that presents meaningless syllables called “logatomes” in fluctuating interference noise (Rahne et al. 2010). After a logatome is presented, the listener is required to identify the syllable heard, either by selecting from a range of meaningless syllables displayed graphically on a monitor or similar device, or by reproducing the sound heard so that it can be registered by speech recognition software for scoring. Although this approach addresses issues identified with language-specific hearing tests, in the latter case the listener is still required to verbally repeat the speech sound heard. The accuracy of the test is then subject to the production ability of the listener, the accuracy of the speech recognition software, and the capability of the software to recognize the speech of users with varying first languages and accents. If the test instead requires the user to identify and select a graphically presented written symbol, it cannot quickly and simply be applied to any language, since languages around the world use different orthographic systems. Despite moving away from a test requiring knowledge of a specific language, the test would still require review and modification to transfer between languages. Lastly, to identify and select different syllables, the user must be able to read, or at least to unambiguously associate sounds with the symbols representing them.

Therefore, there is a need to devise a method for assessing the auditory capacity of a listener to discriminate between speech sounds in a language-neutral manner that does not require a hearing professional to be present during testing and to score responses. Further, it would be valuable for the test results to provide an estimate of the probability that the listener will score higher with a CI than with HAs. The primary objective of this study was to develop and validate a language-independent test of speech sound discrimination. The test measures the ability to detect differences (i.e., discriminate) between broadly similar speech sounds. By identifying and including sounds that are common to most of the world’s languages, the test potentially can be applied in most countries, irrespective of the language spoken by the patient being tested. Accordingly, this study aimed to:

  1. Develop a language-independent test of auditory discrimination (LIT-AD) to be self-administered via a game-based software program and
  2. Examine the relationship between the scores for the new discrimination test and those of a standard sentence test for adults using HAs or CIs.

MATERIALS AND METHODS

The LIT-AD is based on measuring the ability of hearing-impaired people to correctly discriminate pairs of nonsense syllables, presented as sequential triplets in an odd-one-out format. The syllables are in a vowel–consonant–vowel (VCV) format to maximize the availability of acoustic cues for the perception of the medial consonant. The test was designed as a game-based software program for implementation on a laptop computer with a touch screen. This study was approved by the local Institutional Review Board. All participants provided informed written consent. They were reimbursed for participation.

Stage 1: Stimuli selection and test creation

To address Aim 1, consonants were selected for combination with vowels. Discrimination testing was carried out between pairs of consonants to exclude items so difficult for CI users that scores would approach chance even in the best listening condition. Testing was carried out at several SNRs to generate psychometric functions, so that the correction in SNR required to equalize difficulty across items could be determined. Finally, we used the psychometric functions averaged across consonant pairs to determine the SNR required to achieve 70% correct.

Initial stimuli selection

The speech sounds were selected by reviewing the inventory of vowels and consonants in the 40 most common languages in the world ("Phonemic Inventories and Cultural and Linguistic Information Across Languages," n.d., http://www.asha.org/practice/multicultural/Phono/; "The Speech Accent Archive," http://accent.gmu.edu/; and http://web.phonetik.uni-frankfurt.de/upsid.html). This was followed by a review of literature and information on confusion matrices in consonant identification tasks for adult listeners with typical hearing, those using HAs, and those using CIs, in both noise and quiet listening conditions (Miller & Nicely 1955; Bilger & Wang 1976; Ching 2011; Incerti et al. 2011). The selected set of eight consonants comprised [p], [t], [k], [m], [n], [s], [l], and [j]. They occur in the majority of the most common languages, span places of articulation from bilabial to velar, and include manners of articulation encompassing plosive, nasal, fricative, lateral, and approximant (Ladefoged 1982, p.6-14). These consonants were combined with the vowels [i], [ɑ], and [o] (representing the high front, low back, and high back vowels in a vowel triangle) to form VCV syllables, in which V was one of the three vowels and C was one of the eight consonants. A total of 84 possible consonant-pair vowel combinations were constructed (28 possible consonant pairs for discrimination in each of three vowel contexts).

Recording

Six tokens of each syllable were recorded by a native adult female speaker of standard Australian English who was an experienced speech pathologist. Recording was carried out in an anechoic chamber, using a 4155 microphone connected to a Brüel & Kjær sound level meter type 2230 and then to a sound card in a computer, at a sampling rate of 44.1 kHz. All recordings were checked to ensure that they were free from artifacts. The recorded tokens were reviewed by a panel comprising two phoneticians and two audiologists, with clarity determined by group vote. Two of the six recordings were selected for use in the test. All stimuli were high-pass filtered at 250 Hz to minimize the effect on discrimination performance of variations in low-frequency loudspeaker output, should the test subsequently be implemented on different tablet computers and mobile phones. The stimuli were equalized in overall root-mean-square (RMS) level.
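For illustration, these two signal-processing steps can be sketched in Python as follows. The filter order and the target RMS value are assumptions for the sketch; the article specifies only the 250-Hz cutoff and equal overall RMS.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def highpass_250(x, fs=44100):
    """High-pass filter a token at 250 Hz (4th-order Butterworth; the
    filter order is an assumption, not stated in the article)."""
    sos = butter(4, 250, btype="highpass", fs=fs, output="sos")
    return sosfilt(sos, x)

def equalize_rms(tokens, target_rms=0.05):
    """Scale each token so that all share the same overall RMS level.
    The target value is arbitrary for illustration."""
    out = []
    for x in tokens:
        rms = np.sqrt(np.mean(x ** 2))
        out.append(x * (target_rms / rms))
    return out
```

In the actual test, the per-item masking noise level is folded into the same constraint, so each VCV-plus-noise item carries the same total RMS level.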

Test creation

The VCVs were presented sequentially in triplets, in noise shaped according to the international long-term average speech spectrum (Byrne et al. 1994). Noise began 0.5 sec before the first VCV and ended at least 0.1 sec after the third VCV. All 84 possible consonant-pair vowel combinations were constructed. A game-based software program was developed using MATLAB R2017a (version 9.2.0.538062) on the Windows operating system (Microsoft Windows 10 Enterprise Version 10.0) to present target sounds and response foils as sequential triplets in an odd-one-out format. The program was compiled to run on a laptop computer with a touch screen. In each triplet, the two foils were different productions of the same VCV syllable, so they were not identical. Nonidentical foils were used to maximize the likelihood that the acoustic characteristics used to identify the odd one out were those that contributed to its phonemic identity, rather than nonphonemic characteristics in which the target and a foil happened to differ. The target item and the position in which it occurred were randomly selected. Each trial started automatically after the previous response was made, with a pause after every 20 trials. As the test was designed for self-administration, on-screen instructions were provided. Speech tokens were presented acoustically, and a large button with an unidentified flying object was displayed with each token. Once all three tokens had been presented, the participant’s task was to select the button corresponding to the token perceived to have a different consonant sound (the odd one out).
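A minimal sketch of how such an odd-one-out trial could be assembled is shown below. This is illustrative only; the published program was written in MATLAB, and all names here are hypothetical.

```python
import random

def make_trial(foil_tokens, target_token, rng=random):
    """Assemble one odd-one-out triplet.

    foil_tokens: two different recordings of the same VCV (nonidentical foils)
    target_token: one recording of the contrasting VCV (the odd one out)
    Returns the ordered triplet and the (0-based) position of the target,
    with both the target position and the foil order chosen at random.
    """
    pos = rng.randrange(3)
    foils = list(foil_tokens)
    rng.shuffle(foils)
    triplet = foils[:]
    triplet.insert(pos, target_token)
    return triplet, pos

def score_response(chosen_pos, target_pos):
    """A response is correct when the listener selects the odd one out."""
    return chosen_pos == target_pos
```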

Participants

The participants were 50 adults (32 male, 18 female) using CIs. Forty-four of them had Cochlear Nucleus devices and the remaining participants had Med-EL devices. Participants were recruited by sending flyers and written information to CI centers and hearing centers in Sydney, Australia. Adults were invited to participate if they had used a CI for at least one year, had no known disabilities in addition to hearing loss, and used spoken English as their primary mode of communication. Table 1 shows the characteristics of participants.

TABLE 1. - Characteristics of participants in Stage 1.
Sex
 Female, n 18
 Male, n 32
Hearing device
 Bilateral CI, n 13
 CI+HA, n 28
 Unilateral CI, n 9
Hearing loss
 BE4FA*: Mean (SD) 80.1# dB HL (23.4)
Age at first CI
 Mean (SD) 65.4 years (13.5)
 Median 68.3
 Range 39.7–92.0
Duration of CI use
 Mean (SD) 5.6 years (4.4)
 Median 4
 Range 1.0–20.0
Age at assessment
 Mean (SD) 71.1 years (12.1)
 Median 72.5
 Range 43.6–93.8
* Better ear four-frequency average hearing loss in dB HL.
# This calculation included only the 28 participants who wore a hearing aid with a cochlear implant in opposite ears (CI+HA).

Test administration

Discrimination testing was carried out for the purpose of further selecting pairs of sounds from amongst the 84 combinations of consonant pairs and vowels, and for determining the SNR required for equalizing difficulty across each consonant pair and vowel combination.

Procedure

The test was presented via a laptop computer with a touch screen (Windows Surface Pro Tablet) in an acoustically treated booth. All participants were assessed using a CI in one ear only. Bilateral CI users chose the preferred ear for testing. Unilateral CI users who wore an HA in the nonimplanted ear kept their personal HA in place but switched off during testing. For all other participants, the ear contralateral to the test ear was plugged with silicone impression material. The volume control of the computer was initially set so that the output level at the position of the participant’s test ear was calibrated to be 65 dB SPL. The test started with a volume adjustment phase during which speech tokens in speech-shaped noise were presented. On-screen instructions directed the participant to adjust the volume to a comfortable listening level in 1-dB steps, over a range of −10 dB to +10 dB, using an on-screen slider control. After the overall listening level was set, a training phase began. Predetermined VCVs were presented in noise sequentially in triplets at 10 dB SNR, with each VCV accompanied by a pictorial object. The participant was asked to choose the odd one out by pressing the corresponding on-screen button. Three trials were used for training, which could be repeated as required. Response feedback was provided graphically on screen, with a correct response triggering the emergence of a new flying object, and an incorrect response resulting in an explosion of an existing flying object. Testing commenced after training was completed.
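The slider arithmetic amounts to a dB-to-linear gain conversion. A sketch, assuming the slider value acts as a simple gain applied relative to the 65 dB SPL calibration:

```python
def slider_gain(step_db):
    """Convert a slider setting in whole-dB steps (-10..+10) to a linear
    amplitude gain: gain = 10^(dB/20)."""
    if not -10 <= step_db <= 10:
        raise ValueError("slider limited to +/-10 dB")
    return 10 ** (step_db / 20)

def presentation_level(step_db, calibrated_spl=65.0):
    """Resulting presentation level relative to the 65 dB SPL calibration."""
    return calibrated_spl + step_db
```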

Testing in three listening conditions was carried out to generate psychometric functions. To determine the SNR conditions for testing, the first eight participants were assessed at −5 dB, 5 dB, and 15 dB SNR. It was apparent that −5 dB SNR was too difficult, and individuals varied in the SNR at which asymptotic performance was achieved. The next four were tested at 0 dB, 10 dB, and 20 dB SNR. All remaining participants were assessed at 0 dB, 7 dB SNR, and in Quiet (which was treated as 30 dB SNR in the analysis).

Data Analysis

The test software automatically scored the results. These were used to construct psychometric functions for each consonant pair in each vowel context, averaged across participants, by fitting logistic functions using a maximum likelihood criterion. These functions were used to derive the corrections in SNR required to equalize the relative difficulty of each consonant contrast in each vowel context. Based on the results, an SNR was chosen for a suitable subset of the consonant discrimination pairs such that, averaged across the CI recipients, each selected pair was discriminated correctly 70% of the time. The chosen set of consonant-pair and vowel combinations, each with its own SNR, constituted the final set of stimuli in the LIT-AD. The levels of the speech tokens were chosen so that each VCV token, combined with its individual masking noise level, had the same total RMS level.
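For illustration, a fit of this kind can be sketched as follows, assuming a chance-corrected logistic with a guess rate of 1/3 (three-alternative task). The exact parameterization used by the authors is not specified, so the midpoint/slope form below is an assumption.

```python
import numpy as np
from scipy.optimize import minimize

def fit_psychometric(snr, n_correct, n_trials, chance=1/3):
    """Fit p(snr) = chance + (1 - chance) / (1 + exp(-(snr - m)/s))
    to binomial counts by maximum likelihood; returns midpoint m and slope s."""
    def nll(params):
        m, log_s = params
        p = chance + (1 - chance) / (1 + np.exp(-(snr - m) / np.exp(log_s)))
        p = np.clip(p, 1e-9, 1 - 1e-9)
        return -np.sum(n_correct * np.log(p)
                       + (n_trials - n_correct) * np.log(1 - p))
    res = minimize(nll, x0=[np.mean(snr), 0.0], method="Nelder-Mead")
    m, log_s = res.x
    return m, np.exp(log_s)

def snr_for_target(m, s, target=0.70, chance=1/3):
    """Invert the fitted function to find the SNR giving `target` correct;
    the per-item SNR correction is the difference between this value for the
    item's function and for the overall (average) function."""
    q = (target - chance) / (1 - chance)
    return m + s * np.log(q / (1 - q))
```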

Stage 2: Evaluation

To address Aim 2, scores on the LIT-AD were compared with performance in a sentence test using BKB-like sentences (Bench et al. 1979) as stimuli, to determine whether HAs or CIs give better performance in a language-based speech perception test. For a given LIT-AD score, estimates of the probability of better performance with a CI than with HAs, in either the LIT-AD or the BKB sentence test, were computed.

Participants

The participants were 81 hearing-impaired adults who had no known disabilities in addition to hearing loss and who used spoken English as their primary mode of communication. These included 41 adults using HAs who had a four-frequency-average hearing loss (4FA HL, average of hearing thresholds at 0.5, 1.0, 2.0, and 4.0 kHz in dB HL) of greater than 40 dB HL in the better ear, and 40 adults wearing Nucleus CIs with at least one year of CI experience. The age of participants ranged from 26 to 92 years. Table 2 gives the characteristics of participants. Figure 1 shows the distribution of 4FA HL in participants using HAs alone. The duration of HA experience ranged from one to 78 years. At the time of testing, 38 wore bilateral HAs (including four using contralateral routing of signal, or CROS, HAs) and three wore an HA in only one ear. Of those using unilateral HAs, two had a profound loss in the contralateral ear, and one had a severe to profound hearing loss in the contralateral ear but had never worn a hearing device in that ear.

TABLE 2. - Characteristics of participants who wore hearing aids (HA) or cochlear implants (CI) in Stage 2
HA CI
Sex
 Female, n 16 20
 Male, n 25 20
Hearing loss
 BE4FA*: Mean (SD) 75.0 dB HL (20.8) 77.2# dB HL (22.8)
 Duration of loss: Mean (SD) 34.6 years (18.7) 32.2 years (18.8)
Hearing device
 Bilateral HA, n 34
 CROS HA, n 4
 Unilateral HA, n 3
 Bilateral CI, n 10
 CI+HA, n 23
 Unilateral CI, n 7
Age at first CI
 Mean (SD) 59.9 years (18.1)
 Median 62.7
 Range 4.8–89.1
Duration of use
 Mean (SD) 23.1 years (16.2) 8.1 years (7.6)
 Median 17.0 6.0
 Range 1.0–78.0 1.1–32.0
Age at assessment
 Mean (SD) 68.4 years (12.3) 68 years (13.8)
 Median 71.1 71.3
 Range 35.7–90.0 25.8–92.4
*Better ear four-frequency average hearing loss in dB HL.
#This calculation included only the 23 participants who wore a hearing aid with a cochlear implant in opposite ears.

Fig. 1.:
Distribution of hearing threshold levels in participants who wore hearing aids. Hearing threshold levels were averaged across four-octave frequencies from 0.5 to 4 kHz, expressed in terms of dB HL.

Test generation

The LIT-AD excluded three consonant–vowel combinations based on results from Stage 1, and so comprised a total of 81 items. Each individual pair of syllables was treated as an entity, with a separate correction for SNR calculated for each vowel context, based on individual psychometric functions. The overall RMS level of all items (each VCV syllable pair combined with its individual masking noise level) was equalized.

Procedure

Before assessments, the hearing devices of the participants were checked, and batteries were replaced. For users of HAs, otoscopy and tympanometry were performed to exclude cases of middle ear dysfunction. Behavioral pure-tone thresholds were measured in both ears using standard pure-tone audiometry if an audiogram within 12 months of the assessment date was not available. For users of CIs, behavioral pure-tone thresholds were measured in the nonimplanted ear. All participants provided demographic information (age, sex, duration of hearing loss before implantation and age at implantation for users of CIs, duration of HA use, duration of CI use) by completing a written questionnaire.

Two assessments were performed with the participants using their hearing devices at personal settings: 1) A speech reception threshold (SRT) assessment with BKB-like sentences (Bench et al. 1979) developed for Australian use, conducted using an adaptive paradigm (Dawson et al. 2013) in an acoustically treated booth. Sentences were presented in babble noise via a loudspeaker located at 0-degree azimuth at a distance of 1 m from the participant at an overall level of 65 dB SPL. The babble was shaped according to the International Long-Term Average Speech Spectrum (Byrne et al. 1994). Before testing, calibration was completed at the participant position with the participant absent. Each participant completed one practice list before testing. The participant was required to repeat each of the sentences heard. The experimenter scored the responses online using a morpheme scoring method, and the software adaptively adjusted the noise level according to the participant’s responses. The noise level was adapted during test administration using a step size of 4 dB for the initial four sentences, and 2-dB step size for the remaining 12 sentences in a test run of 16 sentences. The noise level increased when the participant responded with more than 50% of morphemes correct and decreased when the participant failed to repeat 50% of morphemes correctly in the sentence. This provided an SRT measure, indicating the SNR at which the participant scored 50% of morphemes correctly. Each participant completed two runs. 2) The LIT-AD was completed using custom-designed software implemented on a laptop computer with a touch screen. The test was self-administered, with minimal input from the researcher. After written instructions were presented on screen, the participant was directed to listen to a sequence of concatenated stimuli and to adjust the overall level of presentation to a comfortable loudness level by adjusting an on-screen slider. 
This was followed by a practice run after which the participant initiated a test run. After every 20 trials, the participant could either take a brief break or press a button to continue testing until all trials were finished. Each participant completed two runs, each comprising 81 items. The order of presentation of the two tests was counterbalanced across participants. All assessments were completed within one test appointment, with breaks when required.
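For illustration, the adaptive noise-level rule (4-dB steps for the first four sentences, 2-dB steps for the remaining 12, noise up after a correct response and down after an incorrect one) can be sketched as follows. The article does not state how the SRT is computed from the adaptive track; taking the mean SNR over the 2-dB-step trials is an assumption of this sketch.

```python
def run_adaptive_srt(respond, start_snr=10.0, n_sentences=16):
    """One adaptive run. `respond(snr)` returns True when the listener
    repeats more than 50% of morphemes correctly; the SNR then decreases
    (noise level up), otherwise it increases (noise level down). Step size
    is 4 dB for the first four sentences and 2 dB thereafter."""
    snr, track = start_snr, []
    for trial in range(n_sentences):
        step = 4.0 if trial < 4 else 2.0
        track.append(snr)
        snr = snr - step if respond(snr) else snr + step
    # SRT estimate: mean SNR over the 2-dB-step trials (assumed rule)
    return sum(track[4:]) / len(track[4:])
```

A simulated listener who is correct whenever the SNR exceeds their true threshold converges to an SRT near that threshold within one 16-sentence run.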

Data Analysis

Descriptive statistics were used to summarize performance scores. For each test, reliability was examined using correlation analysis and test–retest differences based on the first and second runs. The relationship between LIT-AD scores and the SRT for sentences was examined using correlation analyses. The difference in SRTs for sentences between individual HA users and the average CI user was calculated, and related to the corresponding difference in LIT-AD scores using correlation analysis. To estimate the probability of better performance with CIs, multiple regression analysis was first performed to determine the dependence of LIT-AD scores on demographic variables, including age, gender, duration of hearing loss, and duration of CI use. The probability that a non-CI user with any specific LIT-AD score would score higher with CIs was estimated using the scores obtained by our sample of CI users. The approach was to make a regression-based adjustment to the CI users’ scores for the effect of demographic characteristics, then to fit a distribution to the adjusted scores and obtain the required probability from the fitted distribution, on the assumption that the non-CI user’s adjusted score after implantation would come from that distribution. The distribution was fitted using logspline density estimation (Kooperberg & Stone 1991). Bootstrap resampling of the CI users’ scores was used to allow for sampling variability. Statistical analyses were performed using Statistica (version 13) and R (version 3.4.3; R Core Team 2017), with the additional R packages logspline (version 2.1.9; Kooperberg 2016) and ggplot2 (version 3.3.2; Wickham 2016).
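An illustrative sketch of the probability estimation follows, in Python rather than R. A Gaussian kernel density stands in for the logspline density used in the analysis, and the regression-based demographic adjustment is assumed to have been applied to the scores beforehand.

```python
import numpy as np
from scipy.stats import gaussian_kde

def prob_better_with_ci(user_score, ci_scores, n_boot=2000, seed=0):
    """Estimate P(adjusted CI score > user's current score) by fitting a
    density to the (demographics-adjusted) CI users' scores. Bootstrap
    resampling of the CI scores gives a 95% interval for the probability."""
    rng = np.random.default_rng(seed)

    def p_exceed(scores):
        kde = gaussian_kde(scores)
        # P(X > user_score) = 1 - CDF(user_score) under the fitted density
        return 1 - kde.integrate_box_1d(-np.inf, user_score)

    point = p_exceed(ci_scores)
    boots = [p_exceed(rng.choice(ci_scores, size=len(ci_scores), replace=True))
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return point, (lo, hi)
```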

RESULTS

Stage 1: Stimuli selection and test creation

Psychometric functions for each consonant-pair in each vowel context are shown in Figure 2. Three consonant-pairs ([iki] vs [iti]; [imi] vs [ini]; and [ɑnɑ] vs [ɑmɑ]) were excluded. This decision was made on the basis of these pairs having a combination of low discrimination probability in this experiment, and poor discrimination in previous research involving people with normal hearing (Miller & Nicely 1955; Wang & Bilger 1973) and/or hearing loss (Ching et al. 1998; Rødvik et al. 2018).

Fig. 2.:
Psychometric functions for each consonant-pair in three vowel contexts ([i] in green, [ɑ] in red, and [o] in blue). In each panel, the size of the crosses that mark the data points is proportional to the number of participants.

Figure 3 shows psychometric functions for each consonant discrimination pair, averaged across vowel contexts and participant responses. The target test score for an average CI user was set to 70% correct, about midway between chance level (33%) and the maximum (100%). To equate the relative difficulty across items (consonant pairs in vowel contexts), each discrimination pair, in each vowel context, was adjusted in level by the number of decibels by which its psychometric function differed from the overall psychometric function, while keeping the overall level at 65 dB. To generate the final set of stimuli for the LIT-AD, the overall RMS level of all items (each VCV token combined with its individual masking level) was equalized.

Fig. 3.:
Psychometric functions for each consonant discrimination pair, averaged across vowel contexts and participant responses.

Stage 2: Evaluation

On average, the individual adjustment of overall presentation level using the on-screen slider was 0.9 dB (SD: 2.1) for users of HAs and 0.6 dB (SD: 1.1) for users of CIs. Tables 3 and 4 summarize the performance of users of HAs and users of CIs, respectively, for the LIT-AD in terms of percent correct score, and for the sentence test in terms of SRT in dB SNR. Repeatability was examined using product-moment correlation analyses for each test. There were significant correlations between the first and second runs of the LIT-AD for users of HAs (r = 0.88, p <0.001) and users of CIs (r = 0.91, p <0.001). Also, there were significant correlations between the two runs of the sentence test for users of HAs (r = 0.84, p <0.001) and users of CIs (r = 0.88, p <0.001).

TABLE 3. - Performance of 41 users of hearing aids (HAs) for the Language-Independent Test of Auditory Discrimination (LIT-AD) expressed as percent correct (%). The speech reception threshold (SRT) for achieving 50% correct in a sentence test is also shown, expressed in terms of signal to noise ratio (SNR) in decibel (dB). The overall score is the average of two test runs.
Test Retest Difference Overall Score
LIT-AD (%)
 Mean 71.5 73.2 −1.7 72.4
 SD 14.6 13.0 7.0 13.4
SRT (dB SNR)
 Mean 4.2 4.5 −0.3 4.4
 SD 2.7 3.1 1.7 2.8

TABLE 4. - Performance of 40 users of cochlear implants (CIs) for the Language-Independent Test of Auditory Discrimination (LIT-AD) expressed as percent correct (%). The speech reception threshold (SRT) for achieving 50% correct in a sentence test is also shown, expressed in terms of signal to noise ratio (SNR) in decibel (dB). The overall score is the average of two test runs.
Test Retest Difference Overall Score
LIT-AD (%)
 Mean 79.5 80.8 −1.3 80.1
 SD 14.7 14.3 6.3 14.1
SRT (dB SNR)
 Mean 6.3 5.5 0.9 5.9
 SD 3.6 3.4 1.7 3.4

Figure 4 compares the mean discrimination scores of users of HAs with those of users of CIs for individual pairs of consonants. Users of HAs had major difficulties discriminating between plosives and fricatives ([p] vs [s]; [t] vs [s]; [k] vs [s]). This discrimination error relates to the manner of articulation, likely due to reduced audibility of frication and/or formant transitions occurring in the high-frequency spectrum, especially in the [i] context. On the other hand, users of HAs were slightly better than users of CIs at discriminating [p] from [t], [t] from [k], and [m] from [l], all in the [ɑ] context, possibly because they could better extract useful information from the first and second formant transitions in the low frequencies associated with the different places of articulation. In general, consonant discrimination in the [i] context posed the greatest difficulty for users of HAs. Because the second and third formants of [i] typically occur above 2500 Hz, formant transitions that form the acoustic basis for consonant distinctions would be much less audible for users of HAs than for users of CIs. On average, discrimination scores for consonants presented in the [i] context were around 11% higher for users of CIs than for users of HAs, but for consonants in the [ɑ] and [o] contexts, the scores were only around 5% higher.

Fig. 4.:
Mean discrimination scores of users of hearing aids (HAs) compared to those of users of cochlear implants (CIs) for each consonant pair in three vowel contexts ([i] in green, [ɑ] in red, and [o] in blue).

Relationship between LIT-AD score and SRT for sentences

Figures 5 and 6 show the relationship between discrimination scores for the LIT-AD and the SRT for sentence perception at 50% correct, for users of HAs and of CIs respectively. Product-moment correlation analyses, based on the average of test and retest for each type of speech test, revealed that higher discrimination scores were associated with better sentence perception (lower SNR) for users of HAs (r = −0.54, p < 0.001) and users of CIs (r = −0.73, p < 0.001). These correlation coefficients are, of course, attenuated by measurement error in each speech perception score. The standard error of measurement in the average of test and retest (SEMavg) can be estimated as the SD of the test–retest differences divided by 2. The correlations can be corrected for measurement error using

r_corrected = r_observed × √[var_LIT · var_SRT / ((var_LIT − SEM²avg,LIT) · (var_SRT − SEM²avg,SRT))],

where, for each test, var is the observed inter-participant variance of the scores formed by averaging the test and retest scores. The corrected correlations between the two tests were 0.59 and 0.77 for the HA and CI users, respectively. Note that although these are reasonably high error-corrected correlations between the two tests, especially for the CI users, the regression line for the CI users (Fig. 6) is offset relative to that for the HA users (Fig. 5). That is, the relationship between the two tests is not the same for the two groups of device users.
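As a worked check of this correction, the corrected values can be reproduced from the group statistics in Tables 3 and 4. The following Python sketch (function and variable names are illustrative, not from the study software) applies the SEMavg estimate and the disattenuation formula to the reported SDs:

```python
import math

def corrected_correlation(r_obs, sd_lit, sd_diff_lit, sd_srt, sd_diff_srt):
    """Disattenuate an observed correlation for measurement error.

    The SEM of the average of test and retest is estimated as the SD
    of the test-retest differences divided by 2; the observed r is then
    scaled by the ratio of total to error-free variance for each test.
    """
    var_lit, var_srt = sd_lit ** 2, sd_srt ** 2
    sem2_lit = (sd_diff_lit / 2) ** 2
    sem2_srt = (sd_diff_srt / 2) ** 2
    return r_obs * math.sqrt(
        var_lit * var_srt / ((var_lit - sem2_lit) * (var_srt - sem2_srt))
    )

# HA group (Table 3): overall SDs 13.4 (LIT-AD) and 2.8 (SRT);
# test-retest difference SDs 7.0 and 1.7; observed r = -0.54
r_ha = corrected_correlation(-0.54, 13.4, 7.0, 2.8, 1.7)  # about -0.59

# CI group (Table 4): overall SDs 14.1 and 3.4; difference SDs 6.3 and 1.7
r_ci = corrected_correlation(-0.73, 14.1, 6.3, 3.4, 1.7)  # about -0.77
```

Feeding in the tabulated SDs recovers the corrected magnitudes of 0.59 (HA users) and 0.77 (CI users) reported in the text.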

Fig. 5.:
LIT-AD scores in relation to SRT for sentences for users of hearing aids (HAs). The LIT-AD scores are expressed as percent correct, and the speech reception threshold (SRT) for sentence perception is expressed in dB signal to noise ratio (SNR). The solid line shows the line of best fit, and the dashed lines depict 95% confidence intervals.
Fig. 6.:
LIT-AD scores in relation to SRT for sentences for users of cochlear implants (CIs). The LIT-AD scores are expressed as percent correct, and the speech reception threshold (SRT) for sentence perception is expressed in dB signal to noise ratio (SNR). The solid line shows the line of best fit, and the dashed lines depict 95% confidence intervals.

To compare the performance of HA users with that of CI users, the difference between each individual HA user’s SRT for sentences and the mean SRT of the CI users was related to the corresponding difference in LIT-AD scores. Figure 7 shows that every HA-wearing individual who scored higher than the average CI user on the LIT-AD also had better sentence perception in noise. However, the converse was not true. Some individuals who scored more poorly than the average CI user on the LIT-AD also scored more poorly on the sentence perception test, but others scored better than the average CI user on the sentence perception test. This asymmetry is partly a consequence of the different positions of the regression lines shown in Figures 5 and 6. For HA wearers (Fig. 5), 70% correct on the LIT-AD corresponds, on average, to a sentence perception SRT of 5.3 dB, whereas for CI wearers (Fig. 6), 70% on the LIT-AD corresponds to a sentence perception SRT of 9.2 dB. Thus, the same LIT-AD score corresponds to different sentence scores for each group. Figure 7 shows that, on average, when the HA and CI scores were equal on the sentence perception test, the LIT-AD score was 12 percentage points lower for the HA wearers than for the CI wearers.

Fig. 7.:
Comparison of HA scores with mean CI scores. Data points in the lower-left quadrant depict HA users whose LIT-AD scores, expressed as percent correct (%), were lower than the mean CI score, but whose sentence perception, expressed as signal to noise ratio (SNR), was better than that of the average CI user. Data points in the lower-right quadrant depict HA users whose LIT-AD scores and sentence perception were both poorer than those of the average CI user. The top-left quadrant depicts HA users whose LIT-AD scores and sentence perception were both better than those of the average CI user.

Estimating the probability of scoring higher with CI

The multiple regression analysis, using the transformed LIT-AD score as the dependent variable and age, gender, duration of hearing loss, and experience with CI as independent variables, revealed no significant effect of any variable at the 5% significance level; in view of the relatively small sample size, this was not surprising. The estimated effect of duration of hearing loss was very small (b = −0.00192, p = 0.11), with greater duration being associated with lower LIT-AD scores. Nevertheless, we adjusted for duration of hearing loss in our model, based on previous studies of factors influencing the speech recognition performance of users of CIs (Blamey et al. 2013; Holden et al. 2013; Dowell et al. 2016; Kitterick & Lucas 2016; Kumar et al. 2016). Figure 8 shows the estimated probability of scoring higher on the LIT-AD with CIs for different LIT-AD scores obtained with HAs. It also shows the probability of scoring higher on the LIT-AD with an implant by at least 12 percentage points. Similar plots for durations of deafness from 10 to 30 years differed from these lines by less than 5 percentage points, and so are not shown.

Fig. 8.:
Probability of scoring higher in LIT-AD with cochlear implants (CIs), by either any amount (solid line), or by at least 11.7 percentage points (broken line), for non-CI users who had 20 years of hearing loss.

DISCUSSION

The first aim of this study was to develop a language-independent test of auditory discrimination between speech sounds so that people with hearing loss who derive limited benefits from HAs may be identified for consideration of cochlear implantation. The test was based on measuring the ability of hearing-impaired people to correctly discriminate pairs of nonsense syllables, presented as sequential triplets in an odd-one-out format and implemented as a game-based software program. By comparing a given score for a non-CI user to the scores of a sample of CI users, we estimated the probability that a person with a certain LIT-AD score would achieve better performance with CIs.

To achieve the first aim, stimuli were carefully selected to include consonants that occur in the most common languages in the world. The consonants comprised [p], [t], [k], [m], [n], [s], [l], and [j]; combined with a high front vowel [i], a low back vowel [ɑ] or a high back vowel [o] to form VCV syllables. Based on psychometric functions from 50 users of CIs, 81 consonant-pairs with vowel combinations were selected. Individual tokens with the associated masking noise levels were adjusted so that all items were equalised in difficulty.

The second aim was to evaluate the validity of the test by examining the relationship between scores for the new discrimination test and those for a standard sentence test. To achieve this aim, 40 CI users and 41 HA users completed the LIT-AD and a standard sentence test in noise. There was good test–retest reliability for both tests (Tables 3 and 4). The LIT-AD and the sentence test results were significantly correlated, suggesting that those who did better on the LIT-AD also achieved better sentence perception in noise. The correlation for CI users (r = 0.73 before correction for the effects of measurement error) was similar to the correlation between results of the Digit Triplet test and BKB sentence perception in noise (r = 0.76) (Cullington & Aidi 2017). Figure 7, however, shows that some users of HAs who scored lower on the auditory discrimination test than the average CI user achieved better sentence perception performance. This is caused partly by random measurement error, but partly also by a systematic effect that is evident in Figure 7 and from a comparison of Figure 5 with Figure 6. Sentence perception is assisted by low-frequency prosodic cues in continuous discourse, which are of less assistance in differentiating between two consonants in the test. It seems possible that changing from HAs to a CI could simultaneously improve consonant discrimination while worsening the perception of prosodic cues. Thus, on average, sentence perception would not improve by the amount expected on the basis of considering the effect on consonant perception alone.

We need to remember that people understand sentence material partly on the basis of contextual cues that take advantage of the listener’s knowledge of the world and the language, and partly on the basis of their auditory abilities in hearing and processing the acoustic cues of speech. Whereas the former is inherent in individual listeners whether they use HAs or change to CIs, the latter is contingent upon the acoustic cues made accessible by the hearing device used. Test material with low redundancy and low context, such as that used in the LIT-AD, assesses the listener’s auditory ability to perceive acoustic cues for speech sound discrimination. As such, the assessment is well suited to determining the potential effect of a change of hearing device on access to speech cues, a primary goal of assessing speech performance for referrals for CI candidacy evaluation (Vickers, Riley, et al. 2016; Cullington & Aidi 2017). By comparing how a listener who uses HAs scores relative to members of the population who use CIs, the likelihood of improved speech sound discrimination if the listener were to use a CI can be estimated. For example, Figure 8 shows that a person who achieved a 60% correct LIT-AD score while wearing HAs has a 93% probability of achieving better discrimination with a CI. Similarly, there is an 81% probability that the LIT-AD score would increase by 12 percentage points or more, which on average corresponds to an improvement in sentence perception in noise. Information about the predicted chance of benefitting from a CI has been identified as crucial to candidacy criteria (UK Cochlear Implant Study Group 2004), and recommendations based on monosyllabic word tests in English have been published (Dowell et al. 2004; Doran & Jenkinson 2016; Leigh et al. 2016).
It has been suggested that a probability of 70 to 80% of receiving greater benefit from a CI than from HAs, based on word test scores, might warrant referral (Kitterick & Vickers 2017; National Institute for Health and Care Excellence 2019). The LIT-AD extends this approach by providing prognostic information to support referrals for candidacy assessment in circumstances where an English word test may not be applicable. The choice of cutoff points for referral would likely depend on the balance of risks and benefits of cochlear implantation, which might be weighed differently by patients, healthcare professionals, and healthcare payers.
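The probabilities in Figure 8 were derived from logspline density estimates of the CI group’s transformed scores (Kooperberg 2016; Kooperberg & Stone 1991). As a rough sanity check only, and not the study’s actual model, a normal approximation to the CI group’s LIT-AD scores in Table 4 (mean 80.1%, SD 14.1) gives a similar probability for the 60% example:

```python
import math

def phi(z):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# CI group LIT-AD statistics from Table 4 (percent correct)
mean_ci, sd_ci = 80.1, 14.1

# Probability that a randomly chosen CI user scores above a given
# HA-aided LIT-AD score, under the normal approximation
ha_score = 60.0
p_higher = 1.0 - phi((ha_score - mean_ci) / sd_ci)
```

This simplified calculation yields a probability of about 0.92, close to the 93% read from Figure 8; the study’s model differs slightly because it operates on a transformed score scale and adjusts for duration of hearing loss.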

As the LIT-AD has been designed to be language-independent, the test could potentially be used as a screening tool to identify individuals who obtain limited benefits from HAs for speech discrimination, regardless of language background and language proficiency. The test is implemented as game-based software for self-administration on a tablet computer, and it takes about 10 to 15 min to complete. The results support users and healthcare providers in making decisions about referrals by providing information regarding an individual’s auditory abilities and estimating likely benefits from CIs, thereby addressing the present issue of under-identification (Buchman et al. 2020). By using the LIT-AD to screen adults with hearing loss for referrals across regions of diverse languages, it would also be possible to collate data globally to inform healthcare providers about patient populations and the services required.

Also, the LIT-AD can be used as part of a test battery to determine the relative effectiveness of CIs for individual patients by comparing pre-implant with post-implant scores. A prerequisite for such testing is the ability to repeat tests on each listener. The LIT-AD is well suited to this application because it has high test–retest reliability (r ≥ 0.9; see Tables 3 and 4). By selecting consonants from the most common languages, the LIT-AD has the potential for adoption across regions of diverse languages. When used as part of pre- and post-implant assessments across countries, global data on CI benefits for access to speech cues can be collated to inform clinical services.

Limitations and future work

We reported the performance of adults who were post-lingually deafened and those who were early-deafened but received late intervention. As such, the results cannot be generalized to those with congenital hearing loss who received early amplification or cochlear implantation.

Second, the model reported in this study for estimating the probability of potential benefits with CIs, for syllable discrimination and for sentence perception, provides current best estimates based on available data. The probability estimates considered only the influence of duration of hearing loss in the regression analyses (which changed the probability of improvement by only a very small amount), and do not account for other factors that might influence outcomes with CIs. Future studies may incorporate other information to improve the accuracy of the estimated probability of CIs improving speech discrimination ability (e.g., Debruyne et al. 2020). The accuracy of the currently estimated probabilities of scoring higher with CIs than with HAs needs to be checked and fine-tuned in future studies of large groups of patients who have LIT-AD scores before and after receiving CIs. Also, two different probabilities are available. The probability of improving the LIT-AD score may be most relevant when the task is to identify words when there are similar-sounding words that would also make sense in the communication. The probability of improving the LIT-AD score by a larger amount (12 percentage points or more, corresponding to sentence scores also increasing) may be most relevant when trying to understand whole sentences, complete with their rich context and prosody cues. It would be useful to perform further research examining in more detail the reasons why the relative effectiveness of CIs and HAs differs when assessed with sentence material rather than with nonsense syllables.

Third, the present study reported the evaluation of the LIT-AD in English-speaking adult listeners who enrolled in a research study. Future studies will be required to evaluate the test for use by speakers of other languages and patients in clinical settings. Even though the LIT-AD has been designed for self-administration, some patients may need support to complete the test. Furthermore, it may also be useful to adapt the game-based test for use by school-aged children.

In addition, the LIT-AD lends itself to use in tailoring postoperative rehabilitation approaches to monitor and optimize the performance of individuals with CIs. The software automatically scores the responses and generates a report listing confusable consonants after each test is completed. This information can be used by healthcare professionals to assist with setting the parameters (or mapping) of the CI, and to form a basis for auditory training. Further extensions of the LIT-AD application may include scoring for identification as well as discrimination, so that the test can be used for auditory training and for the evaluation of rehabilitation programs to optimize performance.

Conclusions

We described the development of a language-independent test of auditory discrimination implemented as game-based software for self-administration by adults. The test scores were significantly correlated with performance on a standard sentence perception test. The LIT-AD scores were used to estimate the probability of superior performance with cochlear implantation. Future validation with speakers of languages other than English will facilitate the use of the test in different countries and communities. This validated test can assist with increasing access to CIs by screening for those who obtain limited benefits from HAs, thereby facilitating timely referrals for candidacy evaluation, and by providing patients and professionals with practical information about the probability of potential benefits from CIs for auditory discrimination of speech sounds.

Acknowledgments

We are grateful to all the participants and their families for participation in this study. We thank CICADA and the staff at the Sydney Cochlear Implant Centre, Hearing Australia, and Norwest Hearing for their support. We also thank Annette Smith at Northside Audiology Bella Vista Clinic and Celene McNeill at Healthy Hearing and Balance Care for their support for this study.

    REFERENCES

    Access Economics. (2006). Listen Hear! The Economic Impact and Cost of Hearing Loss in Australia: a Report. Access Economics Pty Ltd.
    Bench J., Kowal A., Bamford J. (1979). The BKB (Bamford-Kowal-Bench) sentence lists for partially-hearing children. Br J Audiol, 13, 108–112.
    Bierbaum M., McMahon C. M., Hughes S., Boisvert I., Lau A. Y. S., Braithwaite J., Rapport F. (2020). Barriers and facilitators to cochlear implant uptake in Australia and the United Kingdom. Ear Hear, 41, 374–385.
    Bilger R. C., Wang M. D. (1976). Consonant confusions in patients with sensorineural hearing loss. J Speech Hear Res, 19, 718–748.
    Blamey P., Artieres F., Başkent D., Bergeron F., Beynon A., Burke E., Dillier N., Dowell R., Fraysse B., Gallégo S., Govaerts P. J., Green K., Huber A. M., Kleine-Punte A., Maat B., Marx M., Mawman D., Mosnier I., O’Connor A. F., O’Leary S., et al. (2013). Factors affecting auditory performance of postlinguistically deaf adults using cochlear implants: An update with 2251 patients. Audiol Neurootol, 18, 36–47.
    Buchman C. A., Herzog J. A., McJunkin J. L., Wick C. C., Durakovic N., Firszt J. B., Kallogjeri D.; CI532 Study Group. (2020). Assessment of speech understanding after cochlear implantation in adult hearing aid users: A nonrandomized controlled trial. JAMA Otolaryngol Head Neck Surg, 146, 916–924.
    Byrne D., Dillon H., Tran K., et al. (1994). An international comparison of long-term average speech spectra. J Acoust Soc Am, 96, 2108–2120.
    Carlson M. L., Sladen D. P., Gurgel R. K., Tombers N. M., Lohse C. M., Driscoll C. L. (2018). Survey of the American Neurotology Society on Cochlear Implantation: Part 1, Candidacy Assessment and Expanding Indications. Otol Neurotol, 39, e12–e19.
    Ching T. Y., Dillon H., Byrne D. (1998). Speech recognition of hearing-impaired listeners: Predictions from audibility and the limited role of high-frequency amplification. J Acoust Soc Am, 103, 1128–1140.
    Ching T. Y. C. (2011). Acoustic cues for consonant perception with combined acoustic and electric hearing in children. Semin Hear, 32, 032–041.
    Cullington H. E., Aidi T. (2017). Is the digit triplet test an effective and acceptable way to assess speech recognition in adults using cochlear implants in a home environment? Cochlear Implants Int, 18, 97–105.
    Dawson P. W., Hersbach A. A., Swanson B. A. (2013). An adaptive Australian Sentence Test in Noise (AuSTIN). Ear Hear, 34, 592–600.
    Debruyne J. A., Janssen A. M., Brokx J. P. L. (2020). Systematic review on late cochlear implantation in early-deafened adults and adolescents: Predictors of performance. Ear Hear, 41, 1431–1441.
    Denys S., Hofmann M., Luts H., Guérin C., Keymeulen A., Van Hoeck K., van Wieringen A., Hoppenbrouwers K., Wouters J. (2018). School-Age hearing screening based on speech-in-noise perception using the digit triplet test. Ear Hear, 39, 1104–1115.
    Doran M., Jenkinson L. (2016). Mono-syllabic word test score as a pre-operative assessment criterion for cochlear implant candidature in adults with acquired hearing loss. Cochlear Implants Int, 17(Suppl 1), 13–16.
    Dowell R., Galvin K., Cowan R. (2016). Cochlear implantation: Optimizing outcomes through evidence-based clinical decisions. Int J Audiol, 55(Suppl 2), S1–2.
    Dowell R. C., Hollow R., Winton E. (2004). Outcomes for cochlear implant users with significant residual hearing: Implications for selection criteria in children. Arch Otolaryngol Head Neck Surg, 130, 575–581.
    Gates G. A., Mills J. H. (2005). Presbycusis. Lancet, 366, 1111–1120.
    Gaylor J. M., Raman G., Chung M., Lee J., Rao M., Lau J., Poe D. S. (2013). Cochlear implantation in adults: A systematic review and meta-analysis. JAMA Otolaryngol Head Neck Surg, 139, 265–272.
    Gifford R. H. (2013). Cochlear implant patient assessment: Evaluation of candidacy, performance, and outcomes. Plural Publishing, Inc.
    Holden L. K., Finley C. C., Firszt J. B., Holden T. A., Brenner C., Potts L. G., Gotter B. D., Vanderhoof S. S., Mispagel K., Heydebrand G., Skinner M. W. (2013). Factors affecting open-set word recognition in adults with cochlear implants. Ear Hear, 34, 342–360.
    Incerti P. V., Ching T. Y., Hill A. (2011). Consonant perception by adults with bimodal fitting. Semin Hear, 32, 090–102.
    Jolink C., Helleman H. W., van Spronsen E., Ebbens F. A., Ravesloot M. J., Dreschler W. A. (2016). The long-term results of speech perception in elderly cochlear implant users. Cochlear Implants Int, 17, 146–150.
    Kilman L., Zekveld A., Hällgren M., Rönnberg J. (2014). The influence of non-native language proficiency on speech perception performance. Front Psychol, 5, 651.
    Kitterick P., Vickers D. (2017). Derivation of a candidacy criteria for sufficient benefit from HAs: an analysis of the BCIG service evaluation. Technical report prepared for the British Cochlear Implant Group. British Cochlear Implant Group.
    Kitterick P. T., Lucas L. (2016). Predicting speech perception outcomes following cochlear implantation in adults with unilateral deafness or highly asymmetric hearing loss. Cochlear Implants Int, 17(Suppl 1), 51–54.
    Kooperberg C. (2016). logspline: Routines for Logspline Density Estimation. https://cran.r-project.org/package=logspline.
    Kooperberg C., Stone C. J. (1991). A study of logspline density estimation. Comput Stat Data Anal, 12, 327–347.
    Kumar R. S., Mawman D., Sankaran D., Melling C., O'Driscoll M., Freeman S. M., Lloyd S. K. W. (2016). Cochlear implantation in early deafened, late implanted adults: Do they benefit? Cochlear Implants Int, 17(Suppl 1), 22–25.
    Ladefoged P. (1982). A Course in Phonetics (2nd ed.). Harcourt Brace Jovanovich, Inc.
    Leigh J. R., Moran M., Hollow R., Dowell R. C. (2016). Evidence-based guidelines for recommending cochlear implantation for postlingually deafened adults. Int J Audiol, 55(Suppl 2), S3–8.
    Looi V., Bluett C., Boisvert I. (2017). Referral rates of postlingually deafened adult hearing aid users for a cochlear implant candidacy assessment. Int J Audiol, 56, 919–925.
    Miller G. A., Nicely P. E. (1955). An analysis of perceptual confusions among some English consonants. J Acoust Soc Am, 27, 338–352.
    Moberly A. C., Castellanos I., Mattingly J. K. (2018). Neurocognitive Factors Contributing to Cochlear Implant Candidacy. Otol Neurotol, 39, e1010–e1018.
    National Institute for Health and Care Excellence. (2019). Cochlear implants for children and adults with severe to profound deafness. (Technology Appraisal Guidance TA566). Retrieved October 17, 2020. http://www.nice.org.uk/guidance/ta566.
    Nilsson M., Soli S. D., Sullivan J. A. (1994). Development of the Hearing in Noise Test for the measurement of speech reception thresholds in quiet and in noise. J Acoust Soc Am, 95, 1085–1099.
    O’Neill E. R., Kreft H. A., Oxenham A. J. (2019). Cognitive factors contribute to speech perception in cochlear-implant users and age-matched normal-hearing listeners under vocoded conditions. J Acoust Soc Am, 146, 195.
    Peterson G. E., Lehiste I. (1962). Revised CNC lists for auditory tests. J Speech Hear Disord, 27, 62–70.
    R Core Team. (2017). R: A Language and Environment for Statistical Computing. https://www.r-project.org.
    Rahne T., Ziese M., Rostalski D., Mühler R. (2010). Logatome discrimination in cochlear implant users: Subjective tests compared to the mismatch negativity. ScientificWorldJournal, 10, 329–339.
    Rødvik A. K., von Koss Torkildsen J., Wie O. B., Storaker M. A., Silvola J. T. (2018). Consonant and vowel identification in cochlear implant users measured by nonsense words: A systematic review and meta-analysis. J Speech Lang Hear Res, 61, 1023–1050.
    Smits C., Kapteyn T. S., Houtgast T. (2004). Development and validation of an automatic speech-in-noise screening test by telephone. Int J Audiol, 43, 15–28.
    Soli S. D., Wong L. L. (2008). Assessment of speech intelligibility in noise with the Hearing in Noise Test. Int J Audiol, 47, 356–361.
    Sorkin D. L., Buchman C. A. (2016). Cochlear implant access in six developed countries. Otol Neurotol, 37, e161–e164.
    Spahr A. J., Dorman M. F. (2004). Performance of subjects fit with the Advanced Bionics CII and Nucleus 3G cochlear implant devices. Arch Otolaryngol Head Neck Surg, 130, 624–628.
    Spahr A. J., Dorman M. F., Litvak L. M., Van Wie S., Gifford R. H., Loizou P. C., Loiselle L. M., Oakes T., Cook S. (2012). Development and validation of the AzBio sentence lists. Ear Hear, 33, 112–117.
    UK Cochlear Implant Study Group. (2004). Criteria of candidacy for unilateral cochlear implantation in postlingually deafened adults III: Prospective evaluation of an actuarial approach to defining a criterion. Ear Hear, 25, 361–374.
    Vickers D., De Raeve L., Graham J. (2016). International survey of cochlear implant candidacy. Cochlear Implants Int, 17(Suppl 1), 36–41.
    Vickers D., Riley A., Ricaud R., Verschuur C., Cooper S., Nunn T., Webb K., Muff J., Harris F., Chung M., Humphries J., Langshaw A., Poynter-Smith E., Totten C., Tapper L., Ridgwell J., Mawman D., de Estibariz UM., O'Driscoll M., George N., et al. (2016). Preliminary assessment of the feasibility of using AB words to assess candidacy in adults. Cochlear Implants Int, 17(Suppl 1), 17–21.
    Wang M. D., Bilger R. C. (1973). Consonant confusions in noise: A study of perceptual features. J Acoust Soc Am, 54, 1248–1266.
    Wickham H. (2016). ggplot2: elegant graphics for data analysis (2nd ed.). Springer.
    Wolfe J. (2020). Cochlear implants: Audiologic management and considerations for implantable hearing devices. Plural Publishing.
    Wong L. L., Soli S. D., Liu S., Han N., Huang M. W. (2007). Development of the Mandarin Hearing in Noise Test (MHINT). Ear Hear, 28(2 Suppl), 70S–74S.
    Yawn R., Hunter J. B., Sweeney A. D., Bennett M. L. (2015). Cochlear implantation: A biomechanical prosthesis for hearing loss. F1000Prime Rep, 7, 45.
    Keywords:

    Cochlear implant candidacy; Language-independent Test; Speech sound discrimination; Screening

    Copyright © 2021 The Authors. Ear & Hearing is published on behalf of the American Auditory Society, by Wolters Kluwer Health, Inc.