
Thursday, January 3, 2019

Deaf Child Rehabilitation Before CI Surgery

By Sandro Burdo, MD

The first year of life is the most important time for linguistic and cognitive development, as human beings exploit their heightened neuroplasticity for learning. In this period, inter-neural connections form on the basis of sensory experience, mainly through the hearing apparatus, which is a real cognitive hub for the configuration of the individual connectome (Lancet Neurol. 2016;15(6):610). This is why congenital hearing loss not only involves hearing difficulties but also affects the linguistic and cognitive spheres.

KNOWLEDGE OF THE PAST IS A RESOURCE FOR THE PRESENT

Today, it is well established that hearing is crucial for brain maturation, which makes newborn hearing screening vital. In the same way, it is clear that cochlear implant (CI) surgery is the preferred treatment for disabling deafness. However, for evident practical reasons, CI surgery cannot be carried out at birth and has to be postponed for some months, during which other rehabilitative activities must be organized. The aim of these activities is not to achieve complete hearing recovery but to kick-start adequate linguistic and cognitive development.

For this purpose, clinicians should not forget that the residual hearing of a profoundly deaf child can be used as a support to activate the pre-requisites of verbal communication. In this way, the deaf child can develop the skills to detect and meaningfully process speech that can be phonetically discriminated through lip reading[1] but not through hearing. As such, before CI surgery, it is essential to use visual inputs to guarantee linguistic-cognitive development, since residual hearing alone is insufficient.

However, residual hearing must not be abandoned; it must be stimulated for the functions it can fulfill, without pushing it toward unreachable ones such as phonetic discrimination.

Before the age of CIs, it was well established that severely deaf people were capable of perceiving low-frequency sounds, which activate spatial awareness, attention, and auditory detection, and of processing the prosodic features of speech (the acoustic contour of speech) that transmit communicative meaning during the first months of life (semantic prosody; RERC, 2000).

The rationale concerned the role of residual hearing in activating the pre-requisites of communication so that lip reading could advance to phonetic discrimination and, consequently, the central nervous system could elaborate words (identify, then recognize) to reach verbal comprehension (see Fig. 1). To obtain effective results, hearing aids had to be fitted not only to deliver high amplification but also to extend that amplification to low-frequency sounds, concentrating acoustic energy in the bandwidth where the child has the most efficient residual hearing (Ross 2000).

Figure 1.

Unfortunately, high amplification of low-frequency sounds is taken for granted in modern digital hearing aids, which can sufficiently compensate all degrees of hearing loss except the most profound ones. Furthermore, their extended bandwidth comes at a high cost because of sophisticated electronics that are hardly ever useful in this kind of hearing loss. We can call powerful, low-frequency hearing aids "prosodic," as they leave out sophisticated acoustic features that are useless for people with profound hearing loss. Thanks to these characteristics, prosodic hearing aids can be sold at significantly lower prices.

Going back to clinical practice, it seems obvious that clinicians must do two things to ensure optimal pre-operative rehabilitation: activate the pre-requisites of verbal communication and facilitate the development of lip reading skills. Once again, the importance of the communication pre-requisites must be underlined because, before surgery, people with severe-to-profound deafness cannot be enabled to process speech beyond prosodic discrimination. Moreover, no communicative process can start without first activating the communication pre-requisites of awareness and attention.

LEVELS OF HEARING

Another form of hearing carried by the low frequencies is unconscious primitive hearing, which Ramsdell described at the end of World War II as one of the skills lost after a sudden hearing loss (Ramsdell 1978). Ramsdell described four hearing levels as follows:

1. The primitive level gives us the subconscious perception of background sounds, completing the roughly 180 degrees of visual space and allowing us to monitor the environment. Ramsdell asserts that the perception of these sounds maintains our feeling of being part of a living world, and he concludes that the loss of sound perception at the primitive level is the major cause of the depressive feelings reported by deaf adult patients. The lack of this hearing level could also explain the motor hyperactivity that, in our clinical experience, is highly common in deaf children: it may reflect the child's need to keep checking the surrounding environment, and it is telling that this kind of hyperactivity disappears with correct sensory stimulation. The lack of these primitive functions could likewise cause hypotonia of the neck muscles in babies, which inhibits proper contraction in response to a stimulus. It is useful to understand that the primitive level is sustained by the noise floor, in which low-frequency sounds are prevalent. The noise floor keeps the unconscious primitive hearing level continuously active, but the same acoustic stimulus can stir individual "awareness" when the brain decides to become conscious of those sounds.

2. The warning level alerts us and prepares us for action when the noise floor varies. At this level, we activate our attention, for example, in order to listen.

3. The aesthetic level involves sounds that have an impact on our feelings.

4. Finally, the symbolic level is reached when we understand speech in order to be informed, educated, entertained, and so on.

TECHNOLOGY IN PRE-SURGERY REHABILITATION

During the first months of life, it is important to provide the child with the stimuli necessary to activate the pre-requisites of communication and to enable primitive hearing, giving the child a feeling of safety. These stimuli can be provided using two devices: hearing aids for low-frequency amplification (prosodic hearing aids for primitive hearing, awareness, sound detection, and semantic prosody) and a single-channel pre-sternal vibrator, whose unique function is to activate communicative attention through both environmental and verbal signals, giving "eyes in the back of one's head." While the skin does not allow for speech discrimination, it can be used to transmit basic sounds. Recent research on speech discrimination through the skin has been confined to laboratories, without any widespread practical application. Moreover, our choice to place the vibrator on the skin over the sternum was experimentally confirmed by Suarez, et al. (1997).

The vibrator and hearing aids must be used simultaneously all day, not only during rehabilitation sessions. Only the combined use of these two devices can stimulate primitive hearing and activate the pre-requisites of communication in addition to prosodic discrimination. Clinical reports by Parravicini, et al. (2016) showed that using each device by itself is not as efficient as their combined use. Primitive hearing and the communication pre-requisites enable the construction of the individual connectome, which does not remain unaltered in the absence of stimulation but deteriorates without it. As such, the technology has to be used continuously.

In the past, incorrect comparisons were made between the results obtained with CIs and with a sternal vibrator. These results underlined the superiority of CIs (Carney 1993), and sternal vibrators were eventually abandoned. However, this traditional tool could still be useful during the first months of a severely deaf child's life, until he or she undergoes surgery.

Some researchers have also suggested prolonged use of hearing aids and a sternal vibrator. Nittrouer and Chapman (2009) demonstrated that children could benefit from a period of bimodal stimulation obtained by delaying bilateral surgery, since prosody can help them learn how to perceptually organize the signal received through a CI. Huang, et al. (2017) combined electrical stimulation with tactile stimulation of the index finger, obtaining better results in noise than with electrical stimulation alone.

If these recent experiences confirm that tactile stimulation and prosodic amplification are beneficial for language acquisition and speech recognition, then it follows that they are absolutely indispensable in the period before CI surgery because they are the only tools available to free the deaf child from silence.

Any treatment undertaken before cochlear implantation cannot be limited to verifying that a non-invasive technology is insufficient for a deaf child's complete rehabilitation, thereby justifying the CI surgery. Instead, clinicians must help the deaf child gain control of his or her environment and form a linguistic-cognitive connectome during this developmental period, keeping in mind the rule "use it or lose it" in neurocognitive maturation (Burdo 2018).

[1] We use the term "lip reading" in place of "speech reading" only for historical reasons, although the second expression is more correct (Ross 2000).

ABOUT THE AUTHOR:

Dr. Burdo is the scientific director of the Italian Association Free to Hear (www.liberidisentire.it) and a consultant otologist at the Bassini Hospital in Milan, Italy.

He was the director of the audiovestibology unit at Varese Circolo Hospital, where he led one of the main European centers for deaf rehabilitation.