

Research Article: Observational Study

Electroencephalography correlates of word and non-word listening in children with specific language impairment: An observational study

Fatić, Saška MSca,b,*; Stanojević, Nina MSca,b; Stokić, Miodrag PhDc; Nenadović, Vanja PhDb; Jeličić, Ljiljana PhDa,b; Bilibajkić, Ružica PhDa; Gavrilović, Aleksandar PhDd,e; Maksimović, Slavica PhDa,b; Adamović, Tatjana PhDa,b; Subotić, Miško PhDa

doi: 10.1097/MD.0000000000031840


1. Introduction

Specific language impairment (SLI) is a deficit in language comprehension and production that cannot be ascribed to hearing loss, intellectual disabilities, or neurological deficits.[1] One hypothesis is that SLI is underlain by a "low-level impairment" in auditory perception.[2,3] Electrophysiological studies have documented that auditory processing in children with SLI is atypical[4] and significantly poorer than that in children with normal speech and language development, showing temporal auditory deficits.[5–7] Temporal auditory deficits are hypothesized to be caused by delayed and abnormal auditory maturation[6,8–10] and may include perceptual difficulties during phonological development.[11] These difficulties mainly include suprasegmental disabilities that may affect the perception of speech rhythm and syllable stress in words.[12] An additional factor in the auditory discrimination impairment in SLI could be attention problems.[13] Many studies on mismatch negativity (MMN) responses have shown that perception and discrimination deficits for speech and non-speech sounds occur in children with SLI.[3,4,9] Children with SLI show activation in the right hemisphere during the perception of non-native word stress.[14–16] More specifically, studies have reported higher activation in the right temporal lobe and lower activation in the left temporal lobe, which is not the case in typically developing (TD) children. In addition, there is evidence that children with SLI activate only the left hemisphere during listening, compared to TD children.[17] Bishop[18] showed atypical language lateralization in adults with SLI, which may be explained as a compensatory or maladaptive mechanism in SLI brain functioning.[19] According to Badcock and colleagues,[19] children with SLI manifested reduced brain activity in the left frontal gyrus compared to their TD peers. Reduced brain activity has also been found bilaterally in the superior temporal sulcus,[20] superior temporal gyrus,[21] and left parietal regions.[22]

Evidence suggests that TD children show left-hemisphere activation during auditory perception in inferior frontal,[23] posterior temporal,[23,24] and parietal regions.[24,25] However, other studies have shown bilateral activation of the anterior and posterior superior temporal lobe during auditory perception of speech[26–28] and of the inferior frontal gyrus in the TD population.[29]

With respect to brain oscillations, lower-alpha oscillations are involved in different mental processes, such as attention, whereas higher-alpha oscillations occur during semantic processing.[30] Studies on adults have documented the presence of slow alpha oscillations in occipital regions during word listening.[31] According to Strauß et al,[32] listening to ambiguous non-word stimuli is associated with theta wave activation in the left frontal and right middle temporal regions, while listening to non-semantic non-words is associated with alpha wave activation in parieto-occipital regions, compared to meaningful words. In 1 study in which the task consisted of listening to and repeating non-words, 9- to 11-year-old children with SLI showed activation of subcortical regions (thalamus and globus pallidus), in contrast to age-matched TD children, who showed cortical activation localized in the bilateral posterior and superior temporal gyrus and frontal gyrus.[33] Similar results were obtained in a study with passive story listening and story listening with a required response, showing activation in the bilateral temporal gyrus and left frontal gyrus.[34] Chen and colleagues,[35] in an MMN study of toddlers with delayed expressive language development (defined by the authors as "late talkers"), showed significant differences at the central (Fz, Cz) electrodes relative to the other electrodes during the perception of Mandarin lexical tone stimuli. These results were obtained for children aged 3 to 5 years, while the differences disappeared at a later age, that is, in 6-year-old children.

Electroencephalography (EEG) studies have shown synchronization in the theta band and desynchronization in the alpha band in different memory and perception tasks.[36] Ordinarily, spectral power decreases in alpha brain waves in occipital and parietal regions, while spectral power increases in theta waves in fronto-central regions, during tasks of varying complexity.[36,37] Magnetoencephalographic studies have reported theta-alpha activation in auditory predictive processing in hippocampal and auditory prefrontal regions.[27] Lower brain wave frequencies occur during auditory perception of concrete words (nouns), with a high contribution to EEG coherence between bilaterally positioned posterior electrodes, while higher frequency waves (beta waves) occur during auditory perception of verbs and abstract words, with an EEG coherence pattern in frontal regions.[38] Studies on TD children have shown that theta waves are dominant in the anterior brain regions during speech perception in children aged 0 to 3 years, while in preschoolers they are dominant in the posterior regions.[39] During different visual tasks, theta waves are activated in the left parieto-temporal regions of younger children.[40] In adults, theta is mostly linked with attention, working memory,[41–43] and cognitive skills.[44,45] The amplitude of theta frequencies decreases with age, whereas that of beta waves increases.[40] Alpha rhythm frequency increases with brain development during infancy, from 10 months of age, reaching up to 9 Hz[46] by 4 years.[47] During the period from 7 to 11 years, the alpha rhythm reaches up to 10 Hz in the posterior regions,[48] while another study showed that posterior alpha increased up to 9 Hz between the ages of 3 and 9.[40] Event-related alpha desynchronization occurs during the identification of auditorily presented words in memory tasks, while synchronization occurs during encoding of auditorily presented words.[49] In resting-state conditions with eyes open, the frontal beta band is higher than alpha and theta, while alpha is higher in the eyes-closed condition.[48]

Resting-state studies in children with SLI have shown higher spectral power in low alpha and theta waves and lower spectral power in high alpha and beta waves compared to SLI children with subclinical EEG discharges.[50]

Our study focused on listening to stimuli without any verbal task demand.

Previous studies thus provide a large amount of data on EEG differences in auditory processing between children with SLI and TD children. One of the frequency bands highly sensitive to auditory perception of, and attention to, verbal stimuli, which are essential for language acquisition, is the alpha frequency band. Therefore, our aim was to examine the activation of the alpha band and its topography during word and non-word listening in a clinical sample of children with SLI compared to their TD peers. Developmental EEG studies on clinical samples are scarce, and variability in the developmental changes of the alpha rhythm is present in the TD population, caused by the interaction of maturational processes and environmental influences. Our specific goal was therefore to examine whether alpha desynchronization and its topography differ between listening tasks (words and non-words), using the resting state as a baseline condition in both groups.

2. Methods

2.1. Participants

The sample consisted of 100 participants, divided into 2 groups: an experimental group (E) of 50 children with specific language impairment (SLI) and a control group (C) of 50 children with typical speech and language development (TD) (Table 1). Both groups were divided by age into 2 subgroups: the first subgroup comprised children aged 4.0 to 4.11 years (E = 25, C = 25) and the second subgroup children aged 5.0 to 5.11 years (E = 25, C = 25). Children with typical speech-language development were recruited from the local community (kindergartens and personal contacts). Children with SLI were recruited from the Institute for Experimental Phonetics and Speech Pathology "Đorđe Kostić" in Belgrade. The children with SLI were diagnosed by speech and language pathologists and had not received any speech or language therapy before the study. The inclusion criteria for the final sample were as follows: all participants were native speakers of the Serbian language, with normal or corrected-to-normal vision; normal hearing; no neurological impairments; no use of any medications that may affect the EEG; and normal non-verbal intelligence (with a large scatter between performance and verbal IQ for the children with SLI). The inclusion criteria for SLI were a performance IQ of 85 or higher with a language measure 1.25 standard deviations below average.[51] All participants were boys, because of the significantly higher prevalence of speech-language pathology in boys than in girls.[52] All participants were right-handed according to the Edinburgh Inventory.[53] The study was approved by the scientific council and ethics committee of the Institute for Experimental Phonetics and Speech Pathology and conducted in accordance with the Declaration of Helsinki ethical principles for research involving human subjects. The parents/guardians of all participants provided written informed consent for participation in the study.

Table 1 - Participant characteristics.

                 Age (mo)             PIQ
                 Mean     SD          Mean      SD
4- to 5-yr-old
  TD group       55.76    4.196       103.28    7.950
  SLI group      53.88    4.816       100.48    4.691
  t(48)          1.472                1.517
  P              .148                 .136
5- to 6-yr-old
  TD group       68.04    2.606       103.56    8.632
  SLI group      67.20    3.329       100.12    10.030
  t(48)          1.046                1.300
  P              .301                 .200

PIQ = performance intelligence coefficient, P = exact P value is presented (based on Student t-test), SD = standard deviation, SLI = specific language impairment, TD = typical development.

2.2. Procedure

2.2.1. Speech-language, intelligence, and handedness assessment.

Speech-language skills were examined in both the experimental and control groups before EEG recording. The test material consisted of the following tests: the Dictionary test for children from 3 to 7 years of age,[54] the Peabody Picture Vocabulary Test,[55] and the Token test.[54,56] The Wechsler intelligence scale[57] and the Brunet-Lezine Scale[58] were used to evaluate cognitive abilities and to determine a drop in achievement on the verbal scale (a high scatter between verbal and performance IQ), which is an important diagnostic criterion for SLI. Handedness was assessed with the 10-item version of the Edinburgh handedness inventory: writing, drawing, throwing, using scissors, using a toothbrush, using a knife (without a fork), using a spoon, holding a broom (upper hand), striking a match (hand holding the match), and opening a box/lid.[59]

2.2.2. Stimuli for EEG recording.

Auditory stimuli were taken from an existing stimulus database, the EEG protocol for auditory-verbal processing (otherwise used in the laboratory for cognitive neuroscience of the Research and Development Institute "Life Activities Advancement Center" in Belgrade, Serbia). Stimuli in the EEG protocol consist of words, non-words, sentence questions, and narrative discourse (short known and unknown stories). Words and non-words were used in this study.

The most frequent feminine nouns in the Serbian language (words with the highest frequency of occurrence in standard Serbian), according to the Children's Frequency Dictionary,[60] were used as the stimuli for word listening. The final set of word stimuli comprised five 2-syllable and five 3-syllable words with consonant-vowel structure. All words were balanced in length (4 sounds per 2-syllable word and 6 sounds per 3-syllable word).

Non-words were items that follow the phonological rules of the Serbian language but have no semantic meaning in Serbian. When forming the non-words, the frequency of sounds in Serbian as well as the consonant-vowel-consonant-vowel structure were taken into account. In addition, the sounds that appeared in the word list were matched in frequency of occurrence to those in the non-word list. The final list for each task included five 2-syllable and five 3-syllable words, and five 2-syllable and five 3-syllable non-words.

Stimulus words and non-words were spoken by a professional male speaker, who read the stimuli one by one without variation in melody, rhythm, or emotional expression. The stimuli were recorded in a sound-attenuated room using a Handy Recorder H4N (serial number 00217460, ZOOM Corporation, Japan) placed 20 cm from the speaker's mouth, and the recordings were saved as WAV files at a 44.1 kHz sampling frequency with 16-bit amplitude resolution. Their average duration was 500 ms (range: 485–525 ms). Individual recordings were used to generate the stimuli, which were presented to the participants binaurally through earphones with earplugs at a sound pressure level of 50 dB.

2.2.3. EEG recordings with data acquisition.

Before the experimental procedure, the participants' parents were informed about the experiment. EEG was recorded using a Nihon Kohden Corporation EEG-1200K Neurofax apparatus with an electrode cap and silver/silver-chloride (Ag/AgCl) ring electrodes filled with electro-conductive gel. Nineteen EEG channels were recorded (Fp1, Fp2, F3, F4, C3, C4, P3, P4, F7, F8, T3, T4, T5, T6, O1, O2, Fz, Cz, and Pz). The electrodes were positioned according to the international 10/20 system for electrode placement. The reference was set as (C3 + C4)/2, which is the physical reference of the NK-9100K EEG system, with the ground electrode placed on the forehead. Impedance was maintained below 5 kΩ, with no more than 1 kΩ difference between electrodes. The lower filter was set at 0.53 Hz and the upper filter at 35 Hz. Electrooculograms were recorded to detect eye blinks and horizontal or vertical eye movements. Heart-rate sensors were used for online artifact removal. The AC filter was set to ON. The sampling rate was 200 Hz.

During the experimental procedure, the participants were seated in a comfortable position in a sound-attenuated, electrically shielded room, more precisely, in a square booth made of white non-transparent curtains to eliminate visual stimuli that might influence the experimental tasks. The experimental procedures were performed at approximately noon (12 pm ± 2 hours). An experienced researcher placed the EEG cap on the child's head, with the parent/legal guardian present during electrode placement. When the technical requirements for recording were fulfilled, verbal instructions were given to the children at the beginning of each EEG recording.

The first part of the experiment was a 2-minute resting-state EEG recording (with the possibility of a shorter recording depending on the child's attention). The participants were asked to keep their eyes open and to minimize their movements (eye blinking, head and limb movements) as much as possible during the resting state and all other tasks, in order to minimize artifacts in the EEG trace. A 1-minute resting-state segment was used as the baseline for comparison with the auditory processing tasks.

The second part of the experimental procedure was the recording of the EEG signal during the listening task with 2 different stimuli: words and non-words. Word listening (WL) involved listening to 10 different words, while non-word listening (NWL) involved listening to 10 non-words. In total, there were 20 stimuli in the listening task for each participant, with random presentation of words and non-words. The duration of the inter-stimulus interval was 1.5 seconds. All stimuli, in randomized order, were encoded and annotated using stimulus presentation software.

Duration of the experimental procedure was about 30 minutes.
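As an illustration of the presentation procedure described above, a stimulus sequence of this kind could be scripted in MATLAB roughly as follows. This is a hedged sketch only; the file names, playback routine, and calibration details are assumptions, not taken from the original protocol.

% Illustrative randomization of the 20 stimuli (10 words + 10 non-words)
% with a 1.5-second inter-stimulus interval. File names are hypothetical.
stimFiles = [wordFiles, nonwordFiles];          % cell array of 10 + 10 WAV files
order = randperm(numel(stimFiles));             % random presentation order
for k = order
    [y, fsAudio] = audioread(stimFiles{k});     % load one stimulus (~500 ms)
    sound(y, fsAudio);                          % binaural playback via earphones
    pause(length(y)/fsAudio + 1.5);             % stimulus duration + 1.5-s ISI
end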

2.2.4. EEG signal pre-processing.

The recorded EEG signals were visually inspected for gross artifacts, and the affected segments were removed from further analyses. Heart-rate artifacts were removed during EEG recording using the implemented electrocardiogram filter.

The raw files were then converted to EEG format and imported into the EEGLAB toolbox running on the MATLAB platform[61] for further analysis. All continuous data were filtered using an FIR band-pass filter with a pass band from 1.6 Hz to 30 Hz. The data were re-referenced to the average reference, that is, the average of all channels was used as the new reference. Independent component analysis was performed to remove eye-blink and muscle-activity artifacts from the selected EEG segments.
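A minimal EEGLAB sketch of these pre-processing steps, assuming default settings and a hypothetical file name, could look like the following (an illustration, not the authors' actual script):

% Pre-processing sketch in EEGLAB (MATLAB); parameters follow the text above.
EEG = pop_loadset('filename', 'subject01.set');   % hypothetical converted recording
EEG = pop_eegfiltnew(EEG, 1.6, 30);                % FIR band-pass filter, 1.6-30 Hz
EEG = pop_reref(EEG, []);                          % re-reference to the channel average
EEG = pop_runica(EEG, 'icatype', 'runica');        % independent component analysis
% Components reflecting eye blinks or muscle activity would then be inspected
% and removed, e.g., EEG = pop_subcomp(EEG, badComponents);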

Subsequently, a database of EEG segments for each task and trial was created. For the resting-state condition, the data were segmented into 10-second epochs; 5 epochs per child were included in further analysis, resulting in a total of 500 EEG epochs. For WL and NWL, the marked data in the EEG trace were segmented into 1-second epochs; WL and NWL each had 10 trials per participant, resulting in 1000 EEG epochs per task. All data were saved in .set file format. For the statistical analysis, we used each participant's average over the 5 resting-state epochs and over the 10 trials of each of the WL and NWL conditions.
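Under the assumption that stimulus onsets were annotated with event codes such as 'WL' and 'NWL' (the actual marker names are not given in the text), the segmentation could be sketched in EEGLAB as:

% Epoching sketch; event codes and file name are assumptions.
EEGrest = eeg_regepochs(EEG, 'recurrence', 10, 'limits', [0 10]); % 10-s resting-state epochs
EEGwl   = pop_epoch(EEG, {'WL'},  [0 1]);                         % 1-s word-listening epochs
EEGnwl  = pop_epoch(EEG, {'NWL'}, [0 1]);                         % 1-s non-word-listening epochs
EEGwl   = pop_saveset(EEGwl, 'filename', 'subject01_WL.set');     % store epochs in .set format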

2.2.5. EEG signal analyses (power spectra analysis).

We used MATLAB (version 9.10, R2021a; The MathWorks Inc., Natick, MA) scripts and EEGLAB software for the spectral analysis and graphical presentation. The power spectral density estimate was calculated using Welch's method as implemented in MATLAB. The spectral power for the resting-state condition, WL, and NWL was determined for the adult alpha rhythm range of 8 to 12 Hz.
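For illustration, the alpha-band (8–12 Hz) power of one epoch could be computed with Welch's method in MATLAB as sketched below; the window length, overlap, and variable names are assumptions rather than the authors' settings:

% Alpha spectral power from one 1-second epoch sampled at 200 Hz.
fs = 200;                                        % sampling rate (Hz)
x  = double(EEGwl.data(1, :, 1));                % channel 1, first word-listening epoch
[pxx, f] = pwelch(x, hamming(fs), fs/2, [], fs); % Welch power spectral density estimate
alphaPower = bandpower(pxx, f, [8 12], 'psd');   % integrate the PSD over 8-12 Hz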

2.3. Statistical analysis

Descriptive statistics were calculated for baseline demographic and clinical features. Data are presented as mean (95% confidence interval). Statistical significance between the 2 groups in the listening tasks and the resting-state condition was evaluated using independent-samples t tests. Levene's test for equality of variances was used to assess variance homogeneity, and the equal-variances-assumed or equal-variances-not-assumed test was applied accordingly. Differences between WL and NWL (with values normalized for each individual relative to his resting-state values) were assessed using a 2-way analysis of variance. The relative alpha spectral power values during WL and NWL were normalized using the following equations:

WordNormalisation = (rest − word) / rest and NonWordNormalisation = (rest − non-word) / rest,

where rest is the relative alpha spectral power (SP) during the resting state, word is the relative alpha SP during WL, and non-word is the relative alpha SP during NWL.
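As a small worked sketch of this normalisation (variable names are illustrative; restAlpha, wordAlpha, and nonwordAlpha stand for one participant's per-electrode alpha SP in the three conditions):

% Relative change in alpha spectral power from rest, per electrode.
wordNorm    = (restAlpha - wordAlpha)    ./ restAlpha;  % normalized word-listening values
nonwordNorm = (restAlpha - nonwordAlpha) ./ restAlpha;  % normalized non-word-listening values
% Positive values indicate alpha desynchronization (lower alpha power during the task than at rest).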

The level of significance was set at P < .05. Statistical analyses were performed using the SPSS 21 package (IBM, Chicago, IL, 2012).
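The analyses were run in SPSS; purely for illustration, equivalent group comparisons could be sketched in MATLAB as follows (variable names such as tdAlpha, sliAlpha, normAlpha, groupLabels, and stimulusLabels are hypothetical):

% Independent-samples t test between TD and SLI alpha power at one electrode,
% with Levene's test deciding whether equal variances can be assumed.
pLevene = vartestn([tdAlpha; sliAlpha], [zeros(size(tdAlpha)); ones(size(sliAlpha))], ...
                   'TestType', 'LeveneAbsolute', 'Display', 'off');
vartype = 'equal'; if pLevene < .05, vartype = 'unequal'; end
[~, pT, ~, stats] = ttest2(tdAlpha, sliAlpha, 'Vartype', vartype);
% Two-way ANOVA on the normalized values, factors: group (TD/SLI) and stimulus (word/non-word).
pAnova = anovan(normAlpha, {groupLabels, stimulusLabels}, ...
                'model', 'interaction', 'varnames', {'group', 'stimulus'});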

3. Results

3.1. Age group 1 (4 to 5 years)

Alpha SP in the resting state, during WL, and during NWL, as well as differences between WL and NWL, were compared between the 2 groups at all 19 electrodes using independent-samples t tests. No statistically significant differences in mean alpha SP were found.

3.2. Age group 2 (5 to 6 years)

Mean alpha SP values in the 3 conditions (resting state, WL, and NWL) were compared between the 2 groups of 5- to 6-year-old children using independent-samples t tests.

Regarding the resting state, our results (see Table S1, Supplemental Digital Content, https://links.lww.com/MD/H951, which presents the differences between the TD and SLI groups in resting-state alpha SP mean values and in normalized mean alpha SP values during WL and NWL) indicate statistically significant differences in alpha SP between SLI and TD children aged 5 to 6 years at 15 electrodes: Fp1: t(51) = 3.474, P < .001; Fp2: t(51) = 3.304, P < .002; F3: t(51) = 2.044, P < .046; F4: t(51) = 2.035, P < .048; P3: t(51) = 3.418, P < .002; P4: t(51) = 3.152, P < .003; O1: t(51) = 2.145, P < .037; O2: t(51) = 3.108, P < .004; F7: t(51) = 2.376, P < .021; F8: t(51) = 2.095, P < .041; T3: t(51) = 2.655, P < .011; T5: t(51) = 3.264, P < .002; T6: t(51) = 2.620, P < .012; Fz: t(51) = 2.588, P < .013; and Pz: t(51) = 3.664, P < .001.

Because there was a difference in mean resting-state alpha SP between the SLI and TD groups, we compared the normalized values of alpha SP during WL (using the formula given above). The results indicate statistically significant differences in alpha SP during WL (see Table S1, Supplemental Digital Content, https://links.lww.com/MD/H951) at the Fp1: t(48) = 3.086, P < .003; Fp2: t(48) = 2.584, P < .013; P3: t(48) = 2.328, P < .024; O2: t(48) = 2.569, P < .013; F7: t(48) = 2.768, P < .008; and T3: t(48) = 2.549, P < .014 electrodes. During WL, children with TD showed lower values of alpha SP than children with SLI (Fig. 1). Similar results, with more pronounced alpha desynchronization in the TD group, were also observed in the non-word listening task (Fig. 2).

Figure 1. Graphs of the statistically significant differences between the TD and SLI groups during the WL task. SLI = specific language impairment, TD = typically developing, WL = word listening.

Figure 2. Graphs of the statistically significant differences between the TD and SLI groups during the NWL task. NWL = non-word listening, SLI = specific language impairment, TD = typically developing.

In addition, the normalized values of alpha SP during NWL were compared and showed significant differences (see Table S1, Supplemental Digital Content, https://links.lww.com/MD/H951) at the Fp1: t(48) = 3.403, P < .001; Fp2: t(48) = 3.230, P < .002; P3: t(48) = 2.773, P < .008; P4: t(48) = 2.022, P < .049; T3: t(48) = 2.072, P < .044; T5: t(48) = 3.356, P < .002; T6: t(48) = 2.786, P < .008; Fz: t(48) = 2.122, P < .039; and Pz: t(48) = 3.583, P < .001 electrodes.

The normalized mean alpha SP values during WL and NWL were compared using a 2-way analysis of variance (Table 2), and the group effect was statistically significant, indicating that TD and SLI children differ during WL and NWL. There were no statistically significant differences between the stimulus conditions (word vs non-word), and there was no interaction between stimulus and group (stimulus × group, P > .05 at all electrodes). Hence, we conclude that word and non-word processing did not differ within either group.

Table 2 - Group effects in the WL and NWL tasks for all 19 electrodes.

        Effect of group        Effect of stimulus     Effect of group × stimulus
        F         P            F        P             F        P
Fp1     20.811    .001         .469     .495          .010     .921
Fp2     16.598    .001         .340     .561          .039     .843
F3      2.928     .09          .079     .779          .192     .663
F4      1.142     .288         .051     .823          .009     .923
C3      .014      .907         .570     .452          .829     .365
C4      .630      .429         .616     .434          .052     .821
P3      12.799    .001         .069     .793          .005     .945
P4      .985      .323         .258     .612          .490     .486
O1      .024      .878         .001     .992          .326     .569
O2      6.915     .010         .522     .472          .108     .743
F7      7.436     .008         .042     .837          .037     .849
F8      4.025     .048         .032     .858          .001     .987
T3      10.774    .001         .006     .941          .296     .588
T4      2.493     .118         .012     .911          .896     .346
T5      13.398    .001         .009     .924          .999     .32
T6      10.783    .001         .121     .729          .180     .673
Fz      3.396     .068         .048     .827          1.734    .191
Cz      3.518     .064         1.097    .298          .057     .812
Pz      3.159     .079         1.743    .19           1.742    .19
NWL = non-word listening, WL = word listening. F values and exact P values from the 2-way analysis of variance are presented.

4. Discussion

The present study aimed to examine potential differences in EEG alpha rhythm spectral power during WL and NWL between children with SLI and their TD peers. There were no statistically significant differences in alpha spectral power during the resting state or the tasks (WL and NWL) in age group 1 (4-5 years) between TD and SLI children. The absence of statistically significant differences in age group 1 (the younger children) while listening to words and non-words could be explained by the possible immaturity of the relevant brain regions.[11,18] Studies have shown the presence of a dominant left-hemispheric theta rhythm in different cognitive tasks[62] while the alpha rhythm is still maturing.[63] In the resting-state condition with eyes open, there were statistically significant differences between the TD and SLI children in age group 2 (5-6 years) in almost all brain regions (15 of 19 electrodes: prefrontal, frontal, anterior temporal, mid-temporal, posterior temporal, parietal, and occipital regions). Activation of heterogeneous regions in the resting state is consistent with the results of de Bie and colleagues,[64] where activations included anterior and posterior connections, that is, sensorimotor, auditory, and visual regions.

Furthermore, this study showed statistically significant differences in alpha SP between TD and SLI children during WL and NWL. The group effects demonstrated that alpha desynchronization was more pronounced in the TD group. The results showed that alpha SP during WL differed between the TD and SLI groups in the prefrontal regions bilaterally, the left anterior and mid-temporal regions, and the right posterior temporal and occipital regions. During NWL, the differences between the SLI and TD groups were more pronounced, appearing in the prefrontal, mid-temporal, posterior temporal, and parieto-occipital regions. WL and NWL activity within these brain regions points to the frontotemporal neural basis of the articulation loop (phonological decoding) strand. Similar findings were reported by Karunanayaka et al[65] regarding the Wernicke-Broca loop, with an indirect connection to parietal regions, involved in listening to speech. Studies with preschoolers have documented that a stable alpha rhythm is localized in occipital and parietal,[27,66] as well as fronto-parietal regions.[67] Furthermore, a stable alpha rhythm occurs in the left frontal region during auditory attention.[23] We found activation of the alpha rhythm in both the left and right temporal regions in TD children during WL and NWL. This finding is in line with other studies that documented brain oscillatory activation in the anterior temporal gyrus during word processing[68] and in the mid- and anterior temporal gyrus during non-word processing.[69] Alpha activation in parietal and occipital regions during non-word processing is similar to other reports, which found alpha rhythm activation in parieto-occipital regions during non-word compared to meaningful word perception[32] and during memory training tasks.[70] Bilateral parietal and temporal alpha activation during NWL could be explained by the presence of the long-term mental lexicon in parietal regions[71] and by activity of the Wernicke area, localized in multifarious parts of the temporal and parietal regions.[72]

Our findings are in contrast to those of other EEG studies with similar experimental designs, but on different developmental disorders. For example, children with epilepsy had a trend of greater alpha desynchronization in the occipital regions compared to their TD peers, but without statistically significant differences.[73] In a study on healthy adults, Strand et al[74] did not find any relationship between alpha activity in parietal regions and auditory attention.

Our results indicate a certain lack of alpha desynchronization during WL and NWL in children with SLI compared to TD children, especially in regions responsible for auditory attention and perception (temporal and frontal). It is well documented that children with SLI have difficulties with phonological processing,[12,13] learning new words,[75] non-word repetition,[76] and adequate selective attention,[77,78] and that they have a poor vocabulary.[79] Our study sheds light on the electrophysiological EEG alpha activity of regions involved in the auditory processes examined in previous behavioral studies. EEG might therefore serve as a useful tool for understanding specific deficits in preschool children with SLI, such as deficits in selective auditory attention, which have so far been explored mostly at the behavioral level.[80,81]

5. Conclusions

Alpha activity in heterogeneous brain regions triggered by NWL compared to WL might indicate that non-word perception engages more brain regions because of the presence of an unknown stimulus. When the brain perceives a known word, it activates the regions related to the mental lexicon. However, when a non-word (a semantically unknown construct) is perceived, there might be "whole-brain activity": perception, a search for a semantic background and for its position in the mental lexicon, phonological structuring (the articulation loop), et cetera. This finding might be used as an objective marker of SLI, given that non-word repetition (or perception) is a strong behavioral diagnostic tool for children with SLI.[76]

The lack of alpha desynchronization in the listening tasks at the neurophysiological level is consistent with the established difficulties in lexical and phonological processing at the behavioral level in children with SLI. The diverse results in children with SLI may point to atypical brain functioning,[18] to variable patterns across different processing tasks, or to "compensatory or maladaptive reorganization" in SLI.[19] The considerations presented here offer several suggestions that are promising for future research, which should extend and illuminate our findings with a larger number of stimuli and in combination with other signal-processing measures, such as event-related potentials or functional magnetic resonance imaging (fMRI). Although many more advanced tools for recording brain activity exist, our study, using EEG as a noninvasive measurement tool together with power spectra analysis, provides fundamental neurophysiological information that will help in understanding speech and language processing, especially in children with SLI.

6. Limitations

A limitation of this study is that only boys were examined. However, boys have a higher risk of inadequate language performance than girls.[82] Even so, our findings should be interpreted with caution. A second limitation is the design of the study, in which only 2 stimulus types, words and non-words, were included, while pseudo-words, used in a variety of studies, were not.[16,74,83] This choice was made (as mentioned before) because non-word repetition (or perception) is a strong diagnostic tool for children with SLI.[76] The third limitation also concerns the EEG study design, namely the use of spectral power only, whereas the majority of studies[3,49,84,85] that have explored the neural basis of auditory processing used event-related potential measures and MMN responses.

Acknowledgments

This work was partially supported by the Ministry of Education, Science and Technological Development of the Republic of Serbia within the project “Influence of psychophysiological, sociological, and cultural factors on speech and language in the child population.” This project was conducted in cooperation with the Faculty of Medical Sciences at the University of Kragujevac.

Author contributions

Conceptualization: Saška Fatić.

Data curation: Saška Fatić.

Formal analysis: Saška Fatić.

Funding acquisition: Saška Fatić.

Investigation: Saška Fatić, Nina Stanojević, Miodrag Stokić, Ljiljana Jeličić, Slavica Maksimović.

Methodology: Saška Fatić, Miodrag Stokić, Ružica Bilibajkić, Tatjana Adamović, Miško Subotić.

Software: Ružica Bilibajkić.

Supervision: Miodrag Stokić, Vanja Nenadović, Aleksandar Gavrilović, Miško Subotić.

Writing – original draft: Saška Fatić.

Writing – review & editing: Miodrag Stokić, Miško Subotić.

    References

    [1]. Leonard LB. Children With Specific Language Impairment. Cambridge Massachusetts, London, England: MIT Press, 2014
    [2]. Bishop DV, McArthur G. Individual differences in auditory processing in specific language impairment: a follow-up study using event-related potentials and behavioural thresholds. Cortex. 2005;41:327–41.
    [3]. Bishop DV, Hardiman MJ, Barry JG. Auditory deficit as a consequence rather than endophenotype of specific language impairment: electrophysiological evidence. PLoS One. 2012;7:e35851.
    [4]. Kujala T, Leminen M. Low-level neural auditory discrimination dysfunctions in specific language impairment—A review on mismatch negativity findings. Dev Cogn Neurosci. 2017;28:65–75.
    [5]. McArthur GM, Bishop DV. Speech and non-speech processing in people with specific language impairment: a behavioural and electrophysiological study. Brain Lang. 2005;94:260–73.
    [6]. Bishop DV, McArthur G. Immature cortical responses to auditory stimuli in specific language impairment: evidence from ERPs to rapid tone sequences. Dev Sci. 2004;7:F11–8.
    [7]. Basu M, Krishnan A, Weber‐Fox C. Brainstem correlates of temporal auditory processing in children with specific language impairment. Dev Sci. 2010;13:77–91.
    [8]. Korpilahti P, Lang HA. Auditory ERP components and mismatch negativity in dysphasic children. Electroencephalogr Clin Neurophysiol. 1994;91:256–64.
    [9]. Bishop DV. Using mismatch negativity to study central auditory processing in developmental language and literacy impairments: where are we, and where should we be going? Psychol Bull. 2007;133:651–72.
    [10]. Tallal P. Experimental studies of language learning impairments: from research to remediation. Bishop DV, Leonard LB, (eds). In: Speech and Language Impairments in Children. Hove: Psychology Press, 2000:131–155.
    [11]. Hsu HJ, Bishop DV. Sequence‐specific procedural learning deficits in children with specific language impairment. Dev Sci. 2014;17:352–65.
    [12]. Corriveau K, Pasquini E, Goswami U. Basic auditory processing skills and specific language impairment: a new look at an old hypothesis. J Speech Language Hearing Res. 2007;50:647–66.
    [13]. Davids N, Segers E, Van den Brink D, et al. The nature of auditory discrimination problems in children with specific language impairment: an MMN study. Neuropsychologia. 2011;49:19–28.
    [14]. Shafer VL, Schwartz RG, Martin B. Evidence of deficient central speech processing in children with specific language impairment: the T-complex. Clin Neurophysiol. 2011;122:1137–55.
    [15]. Shafer VL, Morr ML, Datta H, et al. Neurophysiological indexes of speech processing deficits in children with specific language impairment. J Cogn Neurosci. 2005;17:1168–80.
    [16]. Friedrich M, Herold B, Friederici AD. ERP correlates of processing native and non-native language word stress in infants with different language outcomes. Cortex. 2009;45:662–76.
    [17]. van Bijnen S, Kärkkäinen S, Helenius P, et al. Left hemisphere enhancement of auditory activation in language impaired children. Sci Rep. 2019;9:1–11.
    [18]. Bishop DV. Cerebral asymmetry and language development: cause, correlate, or consequence? Science. 2013;340:1230531.
    [19]. Badcock NA, Bishop DV, Hardiman MJ, et al. Co-localisation of abnormal brain structure and function in specific language impairment. Brain Lang. 2012;120:310–20.
    [20]. Poeppel D, Idsardi WJ, Van Wassenhove V. Speech perception at the interface of neurobiology and linguistics. Phil Trans Royal Soc B. 2008;363:1071–86.
    [21]. de Guibert C, Maumet C, Jannin P, et al. Abnormal functional lateralization and activity of language brain areas in typical specific language impairment (developmental dysphasia). Brain. 2011;134(Pt 10):3044–58.
    [22]. Ellis Weismer S, Plante E, Jones M, et al. A functional magnetic resonance imaging investigation of verbal working memory in adolescents with specific language impairment. J Speech Language Hearing Res. 2005;48:405–25.
    [23]. Khoshkhoo S, Leonard MK, Mesgarani N, et al. Neural correlates of sine-wave speech intelligibility in human frontal and temporal cortex. Brain Lang. 2018;187:83–91.
    [24]. Zattore RJ, Schönwiesner M. Cortical speech and music processes revealed by functional neuroimaging. JA W, (ed). In: The Auditory Cortex. New York: Springer, 2011:657–677.
    [25]. Kovelman I, Mascho K, Millott L, et al. At the rhythm of language: Brain bases of language-related frequency perception in children. Neuroimage. 2012;60:673–82.
    [26]. Hickok G, Poeppel D. The cortical organization of speech processing. Nat Rev Neurosci. 2007;8:393–402.
    [27]. Recasens M, Gross J, Uhlhaas PJ. Low-frequency oscillatory correlates of auditory predictive processing in cortical-subcortical networks: a MEG-study. Sci Rep. 2018;8:14007.
    [28]. Kuuluvainen S, Nevalainen P, Sorokin A, et al. The neural basis of sublexical speech and corresponding nonspeech processing: a combined EEG–MEG study. Brain Lang. 2014;130:19–32.
    [29]. Berthier ML, Dávila G, Torres-Prioris MJ, et al. Developmental dynamic dysphasia: are bilateral brain abnormalities a signature of inefficient neural plasticity? Front Hum Neurosci. 2020;14:73.
    [30]. Klimesch W. Alpha-band oscillations, attention, and controlled access to stored information. Trends Cogn Sci. 2012;16:606–17.
    [31]. Steinmetzger K, Rosen S. Effects of acoustic periodicity, intelligibility, and pre-stimulus alpha power on the event-related potentials in response to speech. Brain Lang. 2017;164:1–8.
    [32]. Strauß A, Kotz SA, Scharinger M, et al. Alpha and theta brain oscillations index dissociable processes in spoken word recognition. Neuroimage. 2014;97:387–95.
    [33]. Pigdon L, Willmott C, Reilly S, et al. The neural basis of nonword repetition in children with developmental speech or language disorder: an fMRI study. Neuropsychologia. 2020;138:107312.
    [34]. Vannest JJ, Karunanayaka PR, Altaye M, et al. Comparison of fMRI data from passive listening and active‐response story processing tasks in children. J Magn Reson Imag. 2009;29:971–6.
    [35]. Chen Y, Tsao F-M, Liu H-M. Developmental changes in brain response to speech perception in late-talking children: a longitudinal MMR study. Dev Cogn Neurosci. 2016;19:190–9.
    [36]. Klimesch W, Vogt F, Doppelmayr M. Interindividual differences in alpha and theta power reflect memory performance. Intell. 1999;27:347–62.
    [37]. Holm A, Lukander K, Korpela J, et al. Estimating brain load from the EEG. Sci World J. 2009;9:639–51.
    [38]. Weiss S, Mueller HM. The contribution of EEG coherence to the investigation of language. Brain Lang. 2003;85:325–43.
    [39]. Orekhova E, Stroganova T, Posikera I, et al. EEG theta rhythm in infants and preschool children. Clin Neurophysiol. 2006;117:1047–62.
    [40]. Perone S, Palanisamy J, Carlson SM. Age‐related change in brain rhythms from early to middle childhood: links to executive function. Dev Sci. 2018;21:e12691.
    [41]. Cavanagh JF, Frank MJ. Frontal theta as a mechanism for cognitive control. Trends Cogn Sci. 2014;18:414–21.
    [42]. Hsieh L-T, Ranganath C. Frontal midline theta oscillations during working memory maintenance and episodic encoding and retrieval. Neuroimage. 2014;85:721–9.
    [43]. Katahira K, Yamazaki Y, Yamaoka C, et al. EEG correlates of the flow state: a combination of increased frontal theta and moderate frontocentral alpha rhythm in the mental arithmetic task. Front Psychol. 2018;9:300.
    [44]. Braithwaite EK, Jones EJ, Johnson MH, et al. Dynamic modulation of frontal theta power predicts cognitive ability in infancy. Dev Cogn Neurosci. 2020;45:100818.
    [45]. Meyer M, Endedijk HM, Van Ede F, et al. Theta oscillations in 4-year-olds are sensitive to task engagement and task demands. Sci Rep. 2019;9:1–11.
    [46]. Cornelissen L, Kim SE, Lee JM, et al. Electroencephalographic markers of brain development during sevoflurane anaesthesia in children up to 3 years old. Br J Anaesth. 2018;120:1274–86.
    [47]. Marshall PJ, Bar-Haim Y, Fox NA. Development of the EEG from 5 months to 4 years of age. Clin Neurophysiol. 2002;113:1199–208.
    [48]. Miskovic V, Ma X, Chou C-A, et al. Developmental changes in spontaneous electrocortical activity and network organization from early to late childhood. Neuroimage. 2015;118:237–47.
    [49]. Pesonen M, Björnberg CH, Hämäläinen H, et al. Brain oscillatory 1–30 Hz EEG ERD/ERS responses during the different stages of an auditory memory search task. Neurosci Lett. 2006;399:45–50.
    [50]. Nenadović V, Stokić M, Vuković M, et al. Cognitive and electrophysiological characteristics of children with specific language impairment and subclinical epileptiform electroencephalogram. J Clin Exp Neuropsychol. 2014;36:981–91.
    [51]. Tomblin JB, Records NL, Zhang X. A system for the diagnosis of specific language impairment in kindergarten children. J Speech Hear Res. 1996;39:1284–94.
    [52]. Tomblin JB, Records NL, Buckwalter P, et al. Prevalence of specific language impairment in kindergarten children. J Speech Lang Hear Res. 1997;40:1245–60.
    [53]. Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9:97–113.
    [54]. Kostic D, Vladisavljevic S. Testovi za Ispitivanje Govora i Jezika [Tests for Speech and Language Assesment]. Belgrade: Zavod za udzbenike i nastavna sredstva. (in Serbian); 1983.
    [55]. Dunn LM, Dunn L, Kovačević M, et al. Peabody slikovni test rječnika, PPVT-IIIHR. Zagreb, Naklada Slap. 2010.
    [56]. Vukovic M, Vukovic I, Stojanovik V. Investigation of language and motor skills in Serbian speaking children with specific language impairment and in typically developing children. Res Dev Disabil. 2010;31:1633–44.
    [57]. Biro M. REVISK Manual (2 ed.). Belgrade: Serbian Psychological Society, 1998.
    [58]. Čuturić N. Ljestvica Psihičkog Razvoja Rane Dječje Dobi Brunet-Lezine: Priručnik. Ljubljana: Zavod za produktivnost dela SR Slovenije, 1973.
    [59]. Groen MA, Whitehouse AJ, Badcock NA, et al. Associations between handedness and cerebral lateralisation for language: a comparison of 3 measures in children. PLoS One. 2013;8:e64876.
    [60]. Lukic V. Decji Frekvencijski Recnik. II ed. Belgrade, Serbia: Belgrade Institute for Educational Research (Prosveta In Serbian); 1983.
    [61]. Brunner C, Delorme A, Makeig S. Eeglab–an open source matlab toolbox for electrophysiological research. Biomed Eng. 2013;58:000010151520134182.
    [62]. Kikuchi M, Shitamichi K, Yoshimura Y, et al. Lateralized theta wave connectivity and language performance in 2- to 5-year-old children. J Neurosci. 2011;31:14984–8.
    [63]. Eisermann M, Kaminska A, Moutard ML, et al. Normal EEG in childhood: from neonates to adolescents. Neurophysiologie clinique. 2013;43:35–65.
    [64]. de Bie HM, Boersma M, Adriaanse S, et al. Resting‐state networks in awake 5‐to 8‐year old children. Hum Brain Mapp. 2012;33:1189–201.
    [65]. Karunanayaka PR, Holland SK, Schmithorst VJ, et al. Age-related connectivity changes in fMRI data from children listening to stories. Neuroimage. 2007;34:349–60.
    [66]. Lyakso E, Frolova O, Matveev Y. Speech features and electroencephalogram parameters in 4-to 11-year-old children. Front Behav Neurosci. 2020;14:30.
    [67]. Foxe JJ, Snyder AC. The role of alpha-band brain oscillations as a sensory suppression mechanism during selective attention. Front Psychol. 2011;2:154.
    [68]. DeWitt I, Rauschecker JP. Phoneme and word recognition in the auditory ventral stream. Proc Natl Acad Sci USA. 2012;109:E505–14.
    [69]. Turkeltaub PE, Coslett HB. Localization of sublexical speech perception components. Brain Lang. 2010;114:1–15.
    [70]. Jaušovec N, Jaušovec K. Working memory training: improving intelligence–changing brain activity. Brain Cogn. 2012;79:96–106.
    [71]. Graham R, LaBar KS. Neurocognitive mechanisms of gaze-expression interactions in face processing and social attention. Neuropsychologia. 2012;50:553–66.
    [72]. Scott SK, Johnsrude IS. The neuroanatomical and functional organization of speech perception. Trends Neurosci. 2003;26:100–7.
    [73]. Krause CM, Boman P-A, Sillanmäki L, et al. Brain oscillatory EEG event-related desynchronization (ERD) and-sychronization (ERS) responses during an auditory memory task are altered in children with epilepsy. Seizure. 2008;17:1–10.
    [74]. Strand F, Forssberg H, Klingberg T, et al. Phonological working memory with auditory presentation of pseudo-words ‐ an event related fMRI Study. Brain Res. 2008;1212:48–54.
    [75]. Sheng L, McGregor KK. Lexical–semantic organization in children with specific language impairment. J Speech Lang Hear Res. 2010;53:146–59.
    [76]. Kalnak N, Peyrard-Janvid M, Forssberg H, et al. Nonword repetition–a clinical marker for specific language impairment in Swedish associated with parents’ language-related problems. PLoS One. 2014;9:e89544.
    [77]. Finneran DA, Francis AL, Leonard LB. Sustained attention in children with specific language impairment (SLI). J Speech Language Hearing Res. 2009;52:915–29.
    [78]. Victorino KR, Schwartz RG. Control of auditory attention in children with specific language impairment. J Speech Lang Hear Res. 2015;58:1245–57.
    [79]. McGregor KK, Oleson J, Bahnsen A, et al. Children with developmental language impairment have vocabulary deficits characterized by limited breadth and depth. Int J Language Commun Dis. 2013;48:307–19.
    [80]. Stevens C, Harn B, Chard DJ, et al. Examining the role of attention and instruction in at-risk kindergarteners: electrophysiological measures of selective auditory attention before and after an early literacy intervention. J Learn Disabil. 2013;46:73–86.
    [81]. Wray AH, Stevens C, Pakulak E, et al. Development of selective attention in preschool-age children from lower socioeconomic status backgrounds. Dev Cogn Neurosci. 2017;26:101–11.
    [82]. Norbury CF, Gooch D, Baird G, et al. Younger children experience lower levels of language competence and academic progress in the first year of school: evidence from a population study. J Child Psychol Psych. 2016;57:65–73.
    [83]. Preston JL, Felsenfeld S, Frost SJ, et al. Functional brain activation differences in school-age children with speech sound errors: speech and print processing. J Speech Lang Hear Res. 2012;55:1068–82.
    [84]. Steinmetzger K, Rosen S. Effects of acoustic periodicity and intelligibility on the neural oscillations in response to speech. Neuropsychologia. 2017;95:173–81.
    [85]. Astheimer LB, Sanders LD. Temporally selective attention supports speech processing in 3-to 5-year-old children. Dev Cogn Neurosci. 2012;2:120–8.
    Keywords:

    SLI; spectral signal analysis methods; word/non-word listening


    Copyright © 2022 the Author(s). Published by Wolters Kluwer Health, Inc.