Objective: The purposes of this study were (a) to compare recognition of “real-world” music excerpts by postlingually deafened adults using cochlear implants and normal-hearing adults; (b) to compare the performance of cochlear implant recipients using different devices and processing strategies; and (c) to examine the variability among implant recipients in recognition of musical selections in relation to performance on speech perception tests, performance on cognitive tests, and demographic variables.
Design: Seventy-nine cochlear implant users and 30 normal-hearing adults were tested on open-set recognition of systematically selected excerpts from musical recordings heard in real life. The recognition accuracy of the two groups was compared for three musical genres: classical, country, and pop. Recognition accuracy was correlated with speech recognition scores, cognitive measures, and demographic measures, including musical background.
Results: Cochlear implant recipients were significantly less accurate in recognition of previously familiar (known before hearing loss) musical excerpts than normal-hearing adults (p < 0.001) for all three genres. Implant recipients were most accurate in the recognition of country items and least accurate in the recognition of classical items. There were no significant differences among implant recipients due to implant type (Nucleus, Clarion, or Ineraid) or programming strategy (SPEAK, CIS, or ACE). For cochlear implant recipients, correlations between melody recognition and other measures were moderate to weak in strength; those with statistically significant correlations included age at time of testing (negatively correlated), performance on selected speech perception tests, and the amount of focused music listening following implantation.
Conclusions: Current-day cochlear implants are not effective in transmitting several key structural features (i.e., pitch, harmony, timbral blends) of music essential to open-set recognition of well-known musical selections. Consequently, implant recipients must rely on extracting those musical features most accessible through the implant, such as song lyrics or a characteristic rhythm pattern, to identify the sorts of musical selections heard in everyday life.
There is a growing body of research regarding perception by cochlear implant (CI) recipients of computer-generated stimuli relevant to music (e.g., pure tone discrimination); however, few systematic studies exist regarding their perception of complex musical sounds heard in everyday life (e.g., CDs, radio, etc.). This study examines recognition of “real-world” music excerpts by adult CI recipients and a comparison group of normal-hearing adults. Performance by the CI recipients is examined as a function of device and processing strategy and is correlated with speech, cognitive, and demographic variables. Seventy-nine CI users and 30 normal-hearing adults were tested on open-set recognition of items from classical, country, and pop music styles. CI recipients were significantly less accurate than normal-hearing adults for all three styles. There were no significant differences by device or strategy. In summary, current-day implants do not effectively convey all salient features of music. Thus, recipients are required to extract those musical features most accessible, such as song lyrics or rhythm patterns, in order to identify music heard in everyday life.
School of Music (K.G., C.O., M.R., K.S.), Department of Speech Pathology and Audiology (K.G.), Iowa Cochlear Implant Research Center (K.G., J.F.K.), and Department of Psychology (J.F.K.), The University of Iowa; and Department of Otolaryngology (S.W., B.M.), University of Iowa Hospitals and Clinics, Iowa City, Iowa.
Address for correspondence: Kate Gfeller, Department of Otolaryngology, 200 Hawkins Drive, 212000 PFP, Iowa City, IA, 52242–1078.
Received November 3, 2003; accepted October 2, 2004