
What Does Music Sound Like for a Cochlear Implant User?

Jiam, Nicole T.*,†; Caldwell, Meredith T.‡; Limb, Charles J.§

doi: 10.1097/MAO.0000000000001448

Objective: Cochlear implant research and product development over the past 40 years have focused heavily on speech comprehension, with little emphasis on music listening and enjoyment. How music sounds to a cochlear implant user remains poorly understood, in stark contrast to the importance the public places on music and its contribution to quality of life. The purpose of this article is to describe what music sounds like to cochlear implant users, using a combination of existing research studies and listener descriptions. We examined the published literature on music perception in cochlear implant users, particularly postlingual cochlear implant users, with an emphasis on the primary elements of music and recorded music. Additionally, we administered an informal survey to cochlear implant users to gather first-hand descriptions of music listening experience and satisfaction.

Conclusion: Limitations in cochlear implant technology lead to a music listening experience that is significantly distorted compared with that of normal hearing listeners. On the basis of many studies and sources, we describe how music is frequently perceived as out-of-tune, dissonant, indistinct, emotionless, and weak in bass frequencies, especially for postlingual cochlear implant users—which may in part explain why music enjoyment and participation levels are lower after implantation. Additionally, cochlear implant users report difficulty in specific musical contexts based on factors including but not limited to genre, presence of lyrics, timbres (woodwinds, brass, instrument families), and complexity of the perceived music. Future research and cochlear implant development should target these areas as parameters for improvement in cochlear implant-mediated music perception.

*Department of Otolaryngology—Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, California

†Johns Hopkins University School of Medicine, Baltimore, Maryland

‡University of California San Francisco

§University of California San Francisco School of Medicine, San Francisco, California

Address correspondence and reprint requests to Charles J. Limb, M.D., Department of Otolaryngology—Head and Neck Surgery—UCSF, 2233 Post Street, 3rd Floor, San Francisco, CA 94115; E-mail:

C.J.L. is a consultant and receives research support from Advanced Bionics Corporation, Med-El Corporation, and Oticon.

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

The authors disclose no conflicts of interest.

Like speech, music is built on acoustic parameters and elements that convey meaning and information. However, music arguably represents the most challenging auditory stimulus (1) and frequently involves complex resolution mechanisms that are not required for language (2,3). For some cochlear implant (CI) users (particularly postlingually deafened CI users), music is unpleasant because the auditory stimulus is largely impoverished and frequently misrepresented by the constraints of electrical hearing. By contrast, present-day CI systems convey speech effectively, such that many CI users approach 80% on sentence recognition tests in quiet environments (4). When it comes to more complex auditory sounds (1,5–12), such as tonal languages, voice inflections, vocal emotion, and music, CI processing systems have difficulty delivering the dynamic range and the fine temporal and spectral information that normal hearing (NH) listeners use. Consequently, it is common for CI users to describe music listening as unsatisfactory (13,14) and to perform poorly on music perception tasks (15,16). The objective of this study is to describe the way music sounds to a CI user, with emphasis on postlingually deafened subjects, in light of several specific music processing deficits that have been identified.



Cochlear Implants Are Out-of-Tune

Pitch refers to the perceptual correlate of frequency. In addition to absolute frequency, the qualities of pitch height and chroma are important attributes of how pitch is perceived, often represented by a helical structure (17). Although pitch is conveyed by both spatial and temporal cues (18–21), pitch perception is more reliant on spatial cues, known as place-pitch cues (22,23). Previous reports of high-performing CI users suggest that pitch discrimination may be achieved by using overlapping center frequencies between individual filters (24–26) and, to a lesser degree, by temporal cues obtained through harmonic series processing. For complex-tone pitches and low-frequency tones (up to ∼2000 Hz), temporal encoding becomes particularly relevant in delivering the periodicity rate and its associated fundamental frequency (27). For the majority of CI users, both place-pitch and temporal rate-pitch information are disrupted (28).


Music Lacks Bass Frequencies

In a recent study involving 436 temporal bones, the authors reported a mean cochlear length of 37.6 mm with a range of 32 to 43.5 mm (29). However, the longest electrode array on the market is 31.5 mm, with several commonly used electrodes measuring around 24 mm. Consequently, the electrode array does not stimulate certain regions of the cochlea. A theoretical solution to this problem is to insert the electrode array as deeply as possible into the cochlea; however, insertion angles greater than 400 degrees increase the likelihood of destroying any residual hearing through traumatic mechanical forces (30,31). This delicate balance in stimulating the apical turns of the cochlea, combined with factors such as electrode array stiffness, variations in cochlear anatomy, and intraoperative events, prevents full utilization of the cochlea for many users. As a result, many patients lack low-frequency sounds associated with the most apical regions of the cochlea (32).
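As a rough illustration of why a shallow insertion costs bass, the standard Greenwood place-frequency function for the human cochlea can estimate the lowest-frequency place an array reaches. The 35 mm cochlear length and 24 mm insertion depth below are assumed round numbers for illustration (within the ranges quoted above), and the mapping ignores electrode-to-neuron geometry and insertion angle:

```python
import math  # not strictly needed; kept for clarity that this is pure arithmetic

def greenwood_hz(frac_from_apex: float) -> float:
    """Human Greenwood place-frequency map: F = 165.4 * (10**(2.1*x) - 0.88),
    where x is the fractional distance from the apex (0 = apex, 1 = base)."""
    return 165.4 * (10 ** (2.1 * frac_from_apex) - 0.88)

cochlea_mm = 35.0    # assumed cochlear duct length (within the reported 32-43.5 mm range)
electrode_mm = 24.0  # assumed insertion depth of a typical array, from the base

# Deepest stimulated place, expressed as a fraction of length from the apex
x = (cochlea_mm - electrode_mm) / cochlea_mm
print(f"lowest stimulated place ~ {greenwood_hz(x):.0f} Hz")  # roughly 600 Hz
```

Under these assumptions the most apical contact sits near a 600 Hz place, leaving the sub-600 Hz territory of the cochlea unstimulated, which is consistent with the reported loss of low-frequency sound.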


Pitch Range and Clarity Are Reduced

The input frequency range for CIs is often 100 to 8500 Hz, further limiting pitch processing to a speech-oriented frequency spectrum. For many CI users, applying broad frequency bands to the tonotopic basilar membrane causes a normal acoustic pitch to be perceived as a higher-pitched sound (33). Additionally, CI users perceive pitch in broad steps rather than as a smooth frequency gradient. A study by Zeng et al. (34) found that changes in pitch between adjacent electrodes were described as smaller than the changes reported by NH listeners for the same characteristic frequencies along the basilar membrane. These findings suggest that pitch perception in CI users may be affected by frequency compression secondary to sound processing and transmission constraints. Relatedly, the relatively broad electric fields used in CI-mediated hearing lack the precision with which inner hair cells excite particular auditory neurons. The result is broad neural excitation, which muddles spatial fidelity and alters perceived pitch among CI users.


Melodic Contour Recognition Is Impaired

Melodic contour identification requires accurate perception of changes in pitch, rhythm, and timbre. Given the limits of temporal and spectral resolution in CI users, it is not surprising that CI users perform significantly worse than their NH counterparts on melodic contour tasks. In melody recognition tasks without rhythm cues, CI users’ performance was comparable to that of NH listeners using only 1 to 2 spectral channels (35). CI users struggle to extract melodic pitch, particularly when the timbre complexity of a piece is increased (36). Galvin et al. (3,22) found a significant correlation between melodic contour identification and vowel recognition performance, suggesting the importance of frequency allocation and harmonic relationships in melodic contour perception. Interestingly, these experiments demonstrate large intersubject variability, with no clear superiority of any CI device or sound processing strategy. When CI users were trained on melodic contour recognition, however, their contour identification performance significantly improved. CI subjects with music experience also tended to be higher performers within the CI cohort and were less susceptible to timbre effects on melodic pitch perception (36).


Polyphonic Pitches Are Perceived as Fused

Although purely monophonic music exists, most Western music is polyphonic. A previous study found that CI subjects are severely impaired in their ability to differentiate between simultaneous pitches compared with their NH counterparts when presented with free-field stimuli, because multiple pitches perceptually fuse into a single tone (37). With direct electrical stimulation, however, CI subjects were able to separate polyphonic stimuli using place-pitch cues, rate-pitch cues, or a combination of both in pitch perception tasks. In a study involving 10 CI users, subjects were instructed to indicate whether a given stimulus consisted of 1, 2, or 3 simultaneous pitches (38,39). Despite the difficulty CI users encountered with the task, they were able to differentiate between two- and three-pitch tone complexes when presented electrically. In the two-pitch condition, subjects were significantly better at identifying both pitches when the distance between the electrode pairs increased and the overlap of stimulated neural populations was reduced. At the time of the study, subjects also reported that the stimuli sounded pleasant and reminded them of how they used to hear music before losing their hearing.


Consonance and Dissonance Are Indistinguishable

Consonance and dissonance are two important features derived from harmonics or pitch intervals between musical notes. The concept of sensory dissonance is thought to be fundamental to the human auditory system and largely immune to external influence (40–42). Generally speaking, consonance is perceived as pleasant and dissonance as unpleasant (43). What is accepted today is that these fundamental features are based mainly on subtle relationships in pitch and frequency; the simpler the frequency ratio between two tones, the more consonant they sound (44,45). CI users lack accurate pitch perception and harmonic representation and, consequently, are deprived of the precise frequency representation required for consonance and dissonance perception. Caldwell et al. (46) presented postlingually deafened CI users with dissonant stimuli structured on the basis of harmonic theories of dissonance and previous studies employing dissonant chords. CI participants ranked these permutations on a Likert scale from very unpleasant to very pleasant, and the results revealed that dissonant melodic stimuli with chord accompaniment did not increase CI user-reported unpleasantness to the same extent they did for NH controls. The degree to which CI participants seemed blunted in their ability to recognize and process dissonance suggests that CI users may face a major disadvantage in perceiving emotional content in music.
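The ratio-simplicity idea can be made concrete with a toy calculation. The just-intonation ratios below are standard music-theory values chosen purely for illustration (they are not drawn from the cited studies), and the simplicity measure is a crude proxy, yet ranking intervals by it reproduces the usual consonance ordering:

```python
from fractions import Fraction

# Just-intonation interval ratios (illustrative textbook values)
intervals = {
    "octave":        Fraction(2, 1),
    "perfect fifth": Fraction(3, 2),
    "major third":   Fraction(5, 4),
    "minor second":  Fraction(16, 15),
    "tritone":       Fraction(45, 32),
}

def simplicity(r: Fraction) -> int:
    """Crude complexity proxy: numerator + denominator in lowest terms."""
    return r.numerator + r.denominator

# Sorting by this proxy puts consonant intervals (simple ratios) first
# and dissonant intervals (complex ratios) last
ranked = sorted(intervals, key=lambda name: simplicity(intervals[name]))
print(ranked)
# ['octave', 'perfect fifth', 'major third', 'minor second', 'tritone']
```

Resolving the difference between a 3:2 and a 45:32 ratio demands fine frequency resolution, which is precisely what the broad electric fields of a CI fail to deliver.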


Music Emotion Recognition Is Limited

Numerous studies suggest that CI users, particularly children, have difficulty correctly identifying intended musical emotion compared with their NH peers (47–49). This impairment may be due to the significant handicaps in pitch perception and spectro-temporal fine structure information faced by CI users (50); these limitations likely impair their ability to detect nuances in frequency ratios and harmonic intervals, a skill critical to understanding intended emotion. One previous study (51) found that CI users rely more heavily on temporal cues than pitch cues in inferring musical emotion, often leading them to incorrect conclusions about music's intended valence. This highlights the limited pitch fidelity that CI users have access to, and are therefore able to utilize, in music perception. Given that a primary purpose of music is conveying emotional information, musical emotion blunting—largely due to constraints posed by spectral processing systems—may help explain why CI users report lower levels of music enjoyment and participation after implantation (52,53).



The spectral processing limitations observed in pitch perception also significantly impact timbre perception for CI users. When a musical note or a complex sound is played, it produces vibration modes at several frequencies. At the core is a fundamental frequency, of which all the other frequencies are integer multiples. This set of frequencies is known as a harmonic series, and its perception depends on accurate representation of the integer ratios between these frequencies. In a NH individual, the lower, resolved harmonics dominate pitch perception even when multiple frequencies are being transmitted (54,55). Harmonic integrity is also a critical component of timbre identification and pitch perception, and when these overtones are misaligned or compromised, the listening experience is altered.

When a CI sound processor receives an audio signal, it splits the signal into several frequency bands spanning roughly 8000 Hz in total. Each electrode channel covers a relatively broad band of frequencies, making it difficult to resolve individual harmonics. Theoretically, place-pitch mismatch in CI users compromises the integrity of place-coding by altering the intervals between the various frequencies (56). Currently, place-coding is used to provide information on the spectral shape of an acoustic stimulus. As a result, the fundamental frequency of a complex sound is encoded by temporal fluctuations in the envelope of the electrical current presented by the electrode channels. Looi et al. (28) demonstrated that fundamental frequency may be determined from the amplitude envelope if the stimulation rate is approximately four times the fundamental frequency or more. While reliance on single-electrode pulse trains is sufficient for CI-mediated melody recognition, performance drops when fundamental frequencies exceed 300 Hz (57). Place-coding strategies may have potential for conveying fundamental frequency information at high frequencies, where temporal cues are limited (58). Relatedly, harmonic series reduction improves music enjoyment under CI conditions, whereas the full harmonics are preferred by NH listeners, demonstrating that technical constraints in harmonic transmission persist with present-day CI processing systems (59).
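A small numerical sketch can show how a group of unresolved harmonics yields envelope fluctuations at the fundamental. The 250 Hz fundamental, harmonic range, and smoothing window below are arbitrary illustrative choices, and the rectify-and-smooth step only loosely mimics a CI channel's envelope detector:

```python
import numpy as np

fs = 16000                      # sample rate (Hz)
f0 = 250                        # fundamental of the complex tone (Hz)
t = np.arange(0, 0.2, 1 / fs)

# Harmonics 4-7 (1000-1750 Hz) all fall within one broad "channel",
# so no individual harmonic is resolved by place
x = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(4, 8))

# Channel-style envelope extraction: half-wave rectify, then smooth
# with a ~2 ms moving average (a crude low-pass filter)
rect = np.maximum(x, 0)
win = int(0.002 * fs)
env = np.convolve(rect, np.ones(win) / win, mode="same")

# The envelope fluctuates at the fundamental: its spectrum peaks near f0,
# which is the periodicity cue temporal coding can deliver to the electrode
spec = np.abs(np.fft.rfft(env - env.mean()))
freqs = np.fft.rfftfreq(len(env), 1 / fs)
print(freqs[np.argmax(spec)])   # close to 250.0
```

Even though no single 250 Hz component reaches the channel, the beating among the harmonics recreates the fundamental's period in the envelope, consistent with the envelope-based fundamental-frequency coding described above.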


Musical Instruments Are Difficult to Identify

Timbre is the perceptual quality of an acoustic stimulus that differentiates one sound from other sounds of the same pitch and amplitude, often described as tone color. For a complex tone, perception of timbre relies on the spectral shape and temporal envelope, especially at the onset of the acoustic sound (60). Although CIs can generally encode temporal envelope information accurately, they encounter more difficulty with the spectral shape due to engineering limitations in spectral resolving power and dynamic range compression (61). These limitations blunt the overall performance of CI users in timbre discrimination tasks (62). For example, in a study involving 9 CI users and 25 NH controls, CI users encountered difficulty in timbre recognition (62) and had a tendency to confuse instruments across instrumental families. The CI group also tended to recognize instruments that had a rapid and strong attack time. In general, CI recipients are more readily able to identify percussive envelope instruments (e.g., piano) compared with brass or woodwind instruments (63–65). Although familiarity may play a role in timbre recognition tasks (66), general unstructured exposure to music has not been found to improve music performance in children with CIs (67).

While there does not seem to be a clear consensus on aesthetically pleasing timbres among CI users, there is evidence of greater music enjoyment when timbre sound processing is less demanding. In a study of 15 CI users and 24 hearing aid users, both groups rated music involving several instruments to be less pleasant than music played by one instrument (68). Similarly, Kohlberg et al. (69) demonstrated that reengineering musical pieces to simplify instrumental elements improved music listening in CI users.

In a study by Heng et al. (70), CI users were presented with musical “chimera” instruments that combined the temporal envelope of one instrument with the fine structure of a second instrument. Subjects were asked to choose which source instrument the combined chimera most closely resembled. NH controls were able to base their judgments interchangeably on fine structure or envelope information, depending on which was available in greater quantity. In comparison, CI users relied exclusively on temporal envelope information in their determinations of musical timbre (70). These studies align with the understanding that timbre discrimination, particularly when the acoustic stimulus involves numerous instrumental elements at a time, remains a major weakness of modern-day CI systems (62).


Musical Sound Quality Is Poor

Despite a number of studies indicating poor sound and music appraisal in CI users—that is, suggesting that CI users do not enjoy music (53,62,71–73)—CI-mediated musical sound quality itself is not well studied. Existing studies on sound quality suggest that it is highly diminished in CI users relative to NH listeners (74–76). One CI user writes, “Often I have described what I hear… like when as children we communicated using two tin cans connected by a long string–very tinny.” This “tinny” quality is commonly used by CI users to describe music (72). Further musical sound quality impairments stem from limited access to low-frequency information (77) in addition to a compressed dynamic range (15).



Basic Rhythm Patterns Are Preserved

Rhythm is described as a pattern of beats and rests that contributes to the structure of sound in time. It is generally found that CI users perceive rhythm patterns at satisfactory levels (78–81), partly because acoustic onsets are detectable through the temporal envelopes delivered by CIs. In fact, when compared with NH controls, CI users usually perform at nearly comparable levels on rhythmic tasks (35,79,82,83). In a study conducted by Phillips-Silver et al. (84), a heterogeneous group of CI users were able to pick out the beat and move in time to Latin merengue music. Participants’ performance improved with unpitched drum tones, highlighting a nearly normal capacity to discriminate timing events in more difficult rhythmic tasks and the degree of synchrony that occurs between electrical pulses and nerve firing. Similarly, prelingually deaf children with CIs were able to identify familiar songs using pitch and timing cues, and performed marginally above chance with timing cues alone (85).

With regard to rhythmic clocking (as opposed to pattern identification), Kim et al. (86) studied the integrity of internal rhythmic timekeeping. To investigate perception of time independently of rhythmic pattern, CI users were asked to indicate whether the final beat of a four-beat series presented at different tempos was isochronous or anisochronous (i.e., falling slightly before or after where an isochronous beat should fall). CI users performed comparably to their NH counterparts, consistent with previous literature indicating that basic rhythm perception is largely intact with current CI-processing strategies. However, it should be emphasized that studies of rhythm perception in CI users have generally relied on rhythmic information presented in isolation. In complex real-world music, overlapping streams of information, often separated by timbre or frequency rather than by time, are generally presented together. For example, both the drum track and the bass track may provide critical rhythm information. Given CI users' difficulty distinguishing musical timbres and the resulting limitations on auditory streaming, it is plausible that a rhythm task integrating timbre information would quickly reveal limitations in complex rhythm processing for CI users.


How Do CI Users Describe the Sound of Music?

Due largely to a lack of objective and reliable measures, it remains difficult to determine how these impairments shape the experience of music listening for a CI user. To learn more, we administered an informal survey regarding listening experiences to CI recipients. Their subjective reflections are included throughout this section to supplement the scientific data presented above.


Enjoyment of Music Varies Widely Among CI Users

CI users’ enjoyment or appraisal of music is variable. One CI recipient reports, “I get much enjoyment from music, it sounds good to me… I do seek out new music,” while another states, “Music is dissonant, out-of-tune, fuzzy, tinny… In general, music is very unpleasant for me now.” This variation stems from multiple sources. CI implantation and rehabilitation are accompanied by a number of varying factors, including but not limited to age at implantation, duration of profound deafness before implantation, musical engagement pre- and postimplantation, type of device, length of electrode array, and processing strategy. These variables can contribute to a range of hearing outcomes after activation, and hence a range in musical sound quality.


Music Listening Is Usually More Enjoyable for Prelingually Deafened Individuals Than Postlingually Deafened Individuals

Relatedly, it is crucial to consider the vast differences in the listening experiences of prelingually and postlingually deafened CI recipients, given factors related to preimplantation hearing and neuroplasticity. Prelingual CI users have never heard speech and music through normal auditory function. Many were additionally implanted at a young age, when the brain is highly plastic, and have had years for neural pathways to adapt to the novel auditory signal transmitted by the implant. Postlingually deafened CI users, however, often compare music heard through their CI with music they heard when they had normal hearing. A CI user who reports enjoying music reflects, “My severe to profound hearing loss started when I was young… Thus, I do not have much musical memory to compare against.” Indeed, existing research suggests that prelingual CI users listen to and enjoy music more than postlingual CI recipients (87,88).

In situations where competing auditory cues are present, music is often described as more of a nuisance than a source of enjoyment. One CI user reports, “Sometimes music is just this cacophonous noise where it's… in the way… and I’m trying to hear other things that are going on around me and the music is just overpowering.”


Music Sounds Distorted

Unsurprisingly, many CI users report distortions of musical constructs related to pitch. Pitch is distorted for a vast portion of the CI population, as reflected in a lack of low-frequency information, multiple frequencies being perceived as single pitches, and difficulty following melodic contours (34,74,77). One CI user stipulates, “I do not have a sense of tune (in-tune/out-of-tune).” This lack of adequate pitch perception can have detrimental effects on music listening, and for those who were musicians before going deaf, it can be especially harmful to their quality of life:

“In all my musical growing up years… I swear I had perfect pitch. I still think I have perfect pitch, and yet I cannot distinguish between pitches… I’m sure that's a major factor in my appreciation of music. Losing music has been a major loss in my life.”


Lyrics Are Helpful

While NH listeners are generally able to enjoy a breadth of musical styles, CI recipients’ enjoyment of music can vary tremendously with instruments and genres. One salient example lies in the presence of vocals in music (53,72). When music is entirely vocal, solos seem easier to follow than ensembles. CI users largely report that hearing vocals in music is difficult, and that when music does contain vocals, it is significantly easier to follow if the lyrics are familiar or when captioning is available:

“What I really enjoy most is listening to music where they’re singing songs, and I can actually see the lyrics. Being able to listen to the music with the words helped me… understand the background of the music, the meaning.”


CI Users Describe Preferences for Timbres

Both spectral distribution and temporal envelope cues are impaired in CI users (89,90). Thus, it is unsurprising that the CI population not only has difficulty differentiating instruments but also rates commonly recognized orchestral instrument sounds more negatively than NH listeners do (62,71,91). Certain instrumental tones may be perceived as more pleasant than others. In particular, CI users seem to dislike instrument sounds with a higher natural frequency range and instruments in the string family (violin, viola, etc.) (71). Percussive instruments seem to sound more pleasant, possibly corresponding to the preserved rhythm perception in CI-mediated listening (35,78). One CI user writes, “The piano sounds better than horn instruments. I can never… identify the instrument playing except for drums and piano.” This preference for piano over other instruments is paralleled in the literature (53).


Genre Preferences Are Linked to Dominant Genre-specific Acoustic Features

Musical genres exhibit a range of melodic, instrumental, and rhythmic pattern combinations. For example, country music tends to have a consistent beat and a prominent melodic line sung by a soloist or ensemble, whereas classical music is usually instrumental and performed by an orchestra or solo instrument. Similarly, hip hop or rap music has an extremely strong rhythmic and vocal lyric component, features preferred by CI users (72). Given the instrumental and pitch-related limitations in CI-mediated music listening, it is unsurprising that distinguishing between genres may be difficult for CI users, or that songs in certain genres are more easily recognized than those in others (92). As one CI user writes, “It is hard for me to classify music into genres. Guess it is not part of my language.” The reference to a music-based “language” in this testimonial illuminates an indirect but crucial impact of music perception impairments: music not only sounds distorted through a CI, but these distortions may partially impair its social and relational benefits.

Genres can also differ in musical complexity, defined as the variation and novelty of musical structure combined with the previous musical experiences of the listener (93,94). Prevalent theories about the appraisal-complexity relationship stipulate that there is an “optimal complexity” level at which music contains enough novelty to hold the listener's interest, but not so much that he or she becomes overwhelmed or unable to follow (95). CI users and NH listeners differ in their appraisal of musical complexity. A study by Gfeller et al. (72) suggests that NH listeners enjoy a higher degree of musical complexity, while CI users prefer simpler pieces. In the same study, CI users gave lower likability ratings to classical pieces than to pop or country music, and also rated classical as the most complex (72). Similarly, music consisting of a solo instrument line is perceived more positively by CI users than instrumental ensemble music (68). One CI user remarks, “More complex music like a large band or orchestra is harder for me to relate to.” This preference for simpler music (prominent and repetitive rhythmic and melodic patterns, a simple harmonic structure, single instruments) suggests that musical complexity can create distortions in CI-mediated listening, making music difficult to follow and enjoy.



This article highlights the numerous challenges CI users face in music perception and attempts to describe how music sounds through the CI by combining a wide range of studies and sources (Fig. 1). As the literature suggests, CI development has largely focused on speech comprehension, and current devices lack the capacity to address the complex sound processing required for accurate music perception. Critical areas for improvement include pitch and harmonic perception, as music continues to sound out-of-tune, dissonant, emotionless, indistinct, and weak in bass frequencies for most CI users. With more accurate pitch perception, timbre and harmonic deficits are likely to be partially addressed through truer representation of acoustic stimuli and pitch relationships. Over the past decade, the scientific community has become more aware of the importance of excellent music perception in CI users; indeed, music perception may represent a higher level of auditory performance than speech perception. As such, a music-focused approach to CI engineering and development serves as an excellent tool for identifying parameters that will lead to improvements in electrical hearing. In parallel, music rehabilitation is slowly gaining attention as a means of improving music performance postimplantation (96–103). While this growing interest in music perception has brought benefits to some CI users, the degree of improvement that remains to be achieved is vast, and further research is needed.

FIG. 1



1. Limb CJ. Cochlear implant-mediated perception of music. Curr Opin Otolaryngol Head Neck Surg 2006; 14:337–340.
2. Shannon RV. Speech and music have different requirements for spectral resolution. Int Rev Neurobiol 2005; 70:121–134.
3. Galvin JJ, Fu QJ, Shannon RV. Melodic contour identification and music perception by cochlear implant users. Ann N Y Acad Sci 2009; 1169:518–533.
4. Friesen LM, Shannon RV, Baskent D, Wang X. Speech recognition in noise as a function of the number of spectral channels: Comparison of acoustic hearing and cochlear implants. J Acoust Soc Am 2001; 110:1150–1163.
5. Shannon RV. Multichannel electrical stimulation of the auditory nerve in man. I. Basic psychophysics. Hear Res 1983; 11:157–189.
6. Zeng FG. Temporal pitch in electric hearing. Hear Res 2002; 174:101–106.
7. Chatterjee M, Peng S. Processing F0 with cochlear implants: Modulation frequency discrimination and speech intonation recognition. Hear Res 2008; 235:143–156.
8. Luo X, Fu QJ, Galvin JJ III. Vocal emotion recognition by normal-hearing listeners and cochlear implant users. Trends Amplif 2007; 11:301–315.
9. Peng SC, Tomblin JB, Cheung H, Lin YS, Wang LS. Perception and production of mandarin tones in prelingually deaf children with cochlear implants. Ear Hear 2004; 25:251–264.
10. Luo X, Fu QJ. Enhancing Chinese tone recognition by manipulating amplitude envelope: Implications for cochlear implants. J Acoust Soc Am 2004; 116:3659–3667.
11. Ciocca V, Francis AL, Aisha R, Wong L. The perception of Cantonese lexical tones by early-deafened cochlear implantees. J Acoust Soc Am 2002; 111:2250–2256.
12. Wei C, Cao K, Zeng F. Mandarin tone recognition in cochlear-implant subjects. Hear Res 2004; 197:87–95.
13. Lassaletta L, Castro A, Bastarrica M, et al. Changes in listening habits and quality of musical sound after cochlear implantation. Otolaryngol Head Neck Surg 2008; 138:363–367.
14. Kohlberg G, Spitzer JB, Mancuso D, Lalwani AK. Does cochlear implantation restore music appreciation? Laryngoscope 2014; 124:587–588.
15. Limb CJ, Roy AT. Technological, biological, and acoustical constraints to music perception in cochlear implant users. Hear Res 2014; 308:13–26.
16. Looi V, McDermott H, McKay C, Hickson L. The effect of cochlear implantation on music perception by adults with usable pre-operative acoustic hearing. Int J Audiol 2008; 47:257–268.
17. Warren JD, Uppenkamp S, Patterson RD, Griffiths TD. Separating pitch chroma and pitch height in the human brain. Proc Natl Acad Sci U S A 2003; 100:10038–10042.
18. Helmholtz HLF. On the Sensations of Tone. 2nd English ed. 1954; New York: Dover Publications, 1–608.
19. Seebeck A. Beobachtungen über einige Bedingungen der Entstehung von Tönen (Trans: Observations on some conditions of the origin of tones). Ann Physik Chem 1841; 53:417–436.
20. Seebeck A. Über die Sirene (Trans: On the siren). Ann Physik Chem 1843; 60:449–481.
21. Licklider JC. Auditory frequency analysis. In: Cherry C, ed. Information Theory. New York: Academic Press; 1956. 253–268.
22. Galvin JJ III, Fu QJ, Nogaki G. Melodic contour identification by cochlear implant listeners. Ear Hear 2007; 28:302–319.
23. Plant KL, McDermott HJ, van Hoesel RJM, Dawson PW, Cowan RS. Factors influencing electrical place pitch perception in bimodal listeners. J Acoust Soc Am 2014; 136:1199–1211.
24. Drennan WR, Oleson JJ, Gfeller K, et al. Clinical evaluation of music perception, appraisal and experience in cochlear implant users. Int J Audiol 2015; 54:114–123.
25. van Besouw RM, Grasmeder ML. From TEMPO+ to OPUS 2: What can music tests tell us about processor upgrades? Cochlear Implants Int 2011; 12 (suppl 2):S40–S43.
26. Landsberger DM, Galvin JJ III. Discrimination between sequential and simultaneous virtual channels with electrical hearing. J Acoust Soc Am 2011; 130:1559–1566.
27. Johnson DH. The relationship between spike rate and synchrony in responses of auditory-nerve fibers to single tones. J Acoust Soc Am 1980; 68:1115–1122.
28. Looi V, McDermott H, Mckay C, Hickson L. Music perception of cochlear implant users compared with that of hearing aid users. Ear Hear 2008; 29:421–434.
29. Würfel W, Lanfermann H, Lenarz T, Majdani O. Cochlear length determination using cone beam computed tomography in a clinical setting. Hear Res 2014; 316:65–72.
30. Adunka O, Kiefer J. Impact of electrode insertion depth on intracochlear trauma. Otolaryngol Head Neck Surg 2006; 135:374–382.
31. Zeng FG, Rebscher S, Harrison W, Sun X, Feng H. Cochlear implants: System design, integration, and evaluation. IEEE Rev Biomed Eng 2008; 1:115–142.
32. Greenwood DD. A cochlear frequency-position function for several species—29 years later. J Acoust Soc Am 1990; 87:2592–2605.
33. Grasmeder ML, Verschuur CA, Batty VB. Optimizing frequency-to-electrode allocation for individual cochlear implant users. J Acoust Soc Am 2014; 136:3313–3324.
34. Zeng F, Tang Q, Lu T. Abnormal pitch perception produced by cochlear implant stimulation. PLoS One 2014; 9:e88662.
35. Kong Y, Cruz R, Jones JA, Zeng F. Music perception with temporal cues in acoustic and electric hearing. Ear Hear 2004; 25:173–185.
36. Galvin JJ III, Fu QJ, Oba SI. Effect of a competing instrument on melodic contour identification by cochlear implant users. J Acoust Soc Am 2009; 125:98–103.
37. Donnelly PJ, Guo BZ, Limb CJ. Perceptual fusion of polyphonic pitch in cochlear implant users. J Acoust Soc Am 2009; 126:128–133.
38. Penninger RT, Kludt E, Limb CJ, Leman M, Dhooge I, Buechner A. Perception of polyphony with cochlear implants for 2 and 3 simultaneous pitches. Otol Neurotol 2014; 35:431–436.
39. Penninger RT, Limb CJ, Vermeire K, Leman M, Dhooge I. Experimental assessment of polyphonic tones with cochlear implants. Otol Neurotol 2013; 34:1267–1271.
40. Fritz T, Jentschke S, Gosselin N, et al. Universal recognition of three basic emotions in music. Curr Biol 2009; 19:573–576.
41. Trainor LJ, Tsang CD, Cheung VHW. Preference for sensory consonance in 2- and 4-month-old infants. Music Percept 2002; 20:187–194.
42. Laukka P, Eerola T, Thingujam NS, et al. Universal and culture-specific factors in the recognition and performance of musical affect expressions. Emotion 2013; 13:434–449.
43. Fishman YI, Volkov IO, Noh MD, et al. Consonance and dissonance of musical chords: Neural correlates in auditory cortex of monkeys and humans. J Neurophysiol 2001; 86:2761–2788.
44. Shapira Lots I, Stone L. Perception of musical consonance and dissonance: An outcome of neural synchronization. J R Soc Interface 2008; 5:1429–1434.
45. Schellenberg EG, Trehub SE. Natural musical intervals: Evidence from infant listeners. Psychol Sci 1996; 7:272–277.
46. Caldwell MT, Jiradejvong P, Limb CJ. Impaired perception of sensory consonance and dissonance in cochlear implant users. Otol Neurotol 2016; 37:229–234.
47. Hopyan T, Gordon KA, Papsin BC. Identifying emotions in music through electrical hearing in deaf children using cochlear implants. Cochlear Implants Int 2011; 12:21–26.
48. Shirvani S, Jafari Z, Zarandi MM, Jalaie S, Mohagheghi H, Tale MR. Emotional perception of music in children with bimodal fitting and unilateral cochlear implant. Ann Otol Rhinol Laryngol 2016; 125:470–477.
49. Volkova A, Trehub SE, Schellenberg EG, Papsin BC, Gordon KA. Children with bilateral cochlear implants identify emotion in speech and music. Cochlear Implants Int 2013; 14:80–91.
50. Geurts L, Wouters J. Coding of the fundamental frequency in continuous interleaved sampling processors for cochlear implants. J Acoust Soc Am 2001; 109:713–726.
51. Caldwell M, Rankin SK, Jiradejvong P, Carver C, Limb CJ. Cochlear implant users rely on tempo rather than on pitch information during perception of musical emotion. Cochlear Implants Int 2015; 16 (suppl 3):S114–S120.
52. Migirov L, Kronenberg J, Henkin Y. Self-reported listening habits and enjoyment of music among adult cochlear implant recipients. Ann Otol Rhinol Laryngol 2009; 118:350–355.
53. Gfeller K, Christ A, Knutson JF, Witt S, Murray KT, Tyler RS. Musical backgrounds, listening habits, and aesthetic enjoyment of adult cochlear implant recipients. J Am Acad Audiol 2000; 11:390–406.
54. Arehart KH, Croghan NB, Muralimanohar RK. Effects of age on melody and timbre perception in simulations of electro-acoustic and cochlear-implant hearing. Ear Hear 2014; 35:195–202.
55. Moore BCJ. An Introduction to the Psychology of Hearing. 2013; New York: Brill, 1–458.
56. Jiam NT, Pearl MS, Carver C, Limb CJ. Flat-panel CT imaging for individualized pitch mapping in cochlear implant users. Otol Neurotol 2016; 37:672–679.
57. Pijl S, Schwarz DWF. Melody recognition and musical interval perception by deaf subjects stimulated with electrical pulse trains through single cochlear implant electrodes. J Acoust Soc Am 1995; 98:886–895.
58. Geurts L, Wouters J. Better place-coding of the fundamental frequency in cochlear implants. J Acoust Soc Am 2004; 115:844–852.
59. Nemer JS, Kohlberg GD, Mancuso DM, et al. Reduction of the harmonic series influences musical enjoyment with cochlear implants. Otol Neurotol 2017; 38:31–37.
60. Handel S. Timbre perception and auditory object formation. In: Moore BCJ, ed. Hearing. 2nd ed. San Diego, CA: Academic Press; 1995. 425–461.
61. Shannon RV. Psychophysics. In: Tyler RS, ed. Cochlear Implants: Audiological Foundations. San Diego, CA: Singular Publishing Inc.; 1993. 357–389.
62. Kim SJ, Cho YS, Kim EY, Yoo GE. Can young adolescents with cochlear implants perceive different timbral cues? Cochlear Implants Int 2015; 16:61–68.
63. Gfeller K, Knutson JF, Woodworth G, Witt S, DeBus B. Timbral recognition and appraisal by adult cochlear implant users and normal-hearing adults. J Am Acad Audiol 1998; 9:1–19.
64. Gfeller K, Witt S, Adamek M, et al. Effects of training on timbre recognition and appraisal by postlingually deafened cochlear implant recipients. J Am Acad Audiol 2002; 13:132–145.
65. Nimmons GL, Kang RS, Drennan WR, et al. Clinical assessment of music perception in cochlear implant listeners. Otol Neurotol 2008; 29:149–155.
66. Pressnitzer D, Bestel J, Fraysse B. Music to electric ears: Pitch and timbre perception by cochlear implant patients. Ann N Y Acad Sci 2005; 1060:343–345.
67. Gfeller K, Witt S, Spencer LJ, Stordahl J, Tomblin B. Musical involvement and enjoyment of children who use cochlear implants. Volta Rev 1999; 100:213–233.
68. Looi V, McDermott H, McKay C, Hickson L. Comparisons of quality ratings for music by cochlear implant and hearing aid users. Ear Hear 2007; 28 (2 suppl):59S–61S.
69. Kohlberg GD, Mancuso DM, Chari DA, Lalwani AK. Music engineering as a novel strategy for enhancing music enjoyment in the cochlear implant recipient. Behav Neurol 2015; 2015:829680.
70. Heng J, Cantarero G, Elhilali M, Limb CJ. Impaired perception of temporal fine structure and musical timbre in cochlear implant users. Hear Res 2011; 280:192–200.
71. Gfeller K, Witt S, Mehr MA, Woodworth G, Knutson J. Effects of frequency, instrumental family, and cochlear implant type on timbre recognition and appraisal. Ann Otol Rhinol Laryngol 2002; 111:349–356.
72. Gfeller K, Christ A, Knutson J, Witt S, Mehr M. The effects of familiarity and complexity on appraisal of complex songs by cochlear implant recipients and normal hearing adults. J Music Ther 2003; 40:78–112.
73. Stordahl J. Song recognition and appraisal: A comparison of children who use cochlear implants and normally hearing children. J Music Ther 2002; 39:2–19.
74. Roy AT, Jiradejvong P, Carver C, Limb CJ. Assessment of sound quality perception in cochlear implant users during music listening. Otol Neurotol 2012; 33:319–327.
75. Lassaletta L, Castro A, Bastarrica M, et al. Musical perception and enjoyment in post-lingual patients with cochlear implants. Acta Otorrinolaringol (Eng) 2008; 59:228–234.
76. Lassaletta L, Castro A, Bastarrica M, et al. Changes in listening habits and quality of musical sound after cochlear implantation. Otolaryngol Head Neck Surg 2008; 138:363–367.
77. Roy AT, Penninger RT, Pearl MS, et al. Deeper cochlear implant electrode insertion angle improves detection of musical sound quality deterioration related to bass frequency removal. Otol Neurotol 2016; 37:146–151.
78. Drennan WR, Rubinstein JT. Music perception in cochlear implant users and its relationship with psychophysical capabilities. J Rehabil Res Dev 2008; 45:779–789.
79. Brockmeier SJ, Fitzgerald D, Searle O, et al. The MuSIC perception test: A novel battery for testing music perception of cochlear implant users. Cochlear Implants Int 2011; 12:10–20.
80. Cooper WB, Tobey E, Loizou PC. Music perception by cochlear implant and normal hearing listeners as measured by the Montreal Battery for Evaluation of Amusia. Ear Hear 2008; 29:618–626.
81. McDermott HJ. Music perception with cochlear implants: A review. Trends Amplif 2004; 8:49–82.
82. Innes-Brown H, Marozeau JP, Storey CM, Blamey PJ. Tone, rhythm, and timbre perception in school-age children using cochlear implants and hearing aids. J Am Acad Audiol 2013; 24:789–806.
83. Gfeller K, Woodworth G, Robin DA, Witt S, Knutson JF. Perception of rhythmic and sequential pitch patterns by normally hearing adults and adult cochlear implant users. Ear Hear 1997; 18:252–260.
84. Phillips-Silver J, Toiviainen P, Gosselin N, Turgeon C, Lepore F, Peretz I. Cochlear implant users move in time to the beat of drum music. Hear Res 2015; 321:25–34.
85. Volkova A, Trehub SE, Schellenberg EG, Papsin BC, Gordon KA. Children's identification of familiar songs from pitch and timing cues. Front Psychol 2014; 5:863.
86. Kim I, Yang E, Donnelly PJ, Limb CJ. Preservation of rhythmic clocking in cochlear implant users: A study of isochronous versus anisochronous beat detection. Trends Amplif 2010; 14:164–169.
87. Migirov L, Kronenberg J, Henkin Y. Self-reported listening habits and enjoyment of music among adult cochlear implant recipients. Ann Otol Rhinol Laryngol 2009; 118:350–355.
88. Mitani C, Nakata T, Trehub SE, et al. Music recognition, music listening, and word recognition by deaf children with cochlear implants. Ear Hear 2007; 28 (2 suppl):29S–33S.
89. Zeng F. Temporal pitch in electric hearing. Hear Res 2002; 174:101–106.
90. Rahne T, Plontke SK, Wagner L. Mismatch negativity (MMN) objectively reflects timbre discrimination thresholds in normal-hearing listeners and cochlear implant users. Brain Res 2014; 1586:143–151.
91. Galvin J, Zeng F. Musical instrument perception in cochlear implant listeners. Proc 16th Int Congr Acoust 135th Meet Acoust Soc Am 1998; 3:2219–2220.
92. Gfeller K, Olszewski C, Rychener M, et al. Recognition of “real-world” musical excerpts by cochlear implant recipients and normal-hearing adults. Ear Hear 2005; 26:237–250.
93. Berlyne DE. Novelty, complexity, and hedonic value. Percept Psychophys 1970; 8:279–286.
94. North AC, Hargreaves DJ. Subjective complexity, familiarity, and liking for popular music. Psychomusicology 1995; 14:77–93.
95. Heyduk RG. Rated preference for musical compositions as it relates to complexity and exposure frequency. Percept Psychophys 1975; 17:84.
96. Moran M, Rousset A, Looi V. Music appreciation and music listening in prelingual and postlingually deaf adult cochlear implant recipients. Int J Audiol 2016; 55 (suppl 2):S57–S63.
97. Vandali A, Sly D, Cowan R, van Hoesel R. Training of cochlear implant users to improve pitch perception in the presence of competing place cues. Ear Hear 2015; 36:e1–e13.
98. van Besouw RM, Nicholls DR, Oliver BR, Hodkinson SM, Grasmeder ML. Aural rehabilitation through music workshops for cochlear implant users. J Am Acad Audiol 2014; 25:311–323.
99. Looi V, Gfeller K, Driscoll V. Music Appreciation and training for cochlear implant recipients: A review. Semin Hear 2012; 33:307–334.
100. Driscoll VD. The effects of training on recognition of musical instruments by adults with cochlear implants. Semin Hear 2012; 33:410–418.
101. Driscoll VD, Oleson J, Jiang D, Gfeller K. Effects of training on recognition of musical instruments presented through cochlear implant simulations. J Am Acad Audiol 2009; 20:71–82.
102. Gfeller K, Guthe E, Driscoll V, Brown CJ. A preliminary report of music-based training for adult cochlear implant users: Rationales and development. Cochlear Implants Int 2015; 16 (suppl 3):S22–S31.
103. Oba SI, Galvin JJ 3rd, Fu QJ. Minimal effects of visual memory training on auditory performance of adult cochlear implant users. J Rehabil Res Dev 2013; 50:99–110.

Cochlear implants; Harmonics; Hearing; Music perception; Music processing; Pitch; Rhythm; Timbre

Copyright © 2017 by Otology & Neurotology, Inc. Image copyright © 2010 Wolters Kluwer Health/Anatomical Chart Company