Spoken-language processing model: a more expansive approach to examining auditory processing of spoken language

Medwetsky, Larry PhD; Musiek, Frank PhD

doi: 10.1097/01.HJ.0000399145.99879.f5
PATHWAYS

Larry Medwetsky, PhD, is Vice President of Clinical Services at Rochester Hearing and Speech Center in New York. Frank E. Musiek, PhD, is Professor and Director of Auditory Research, Department of Communication Sciences, University of Connecticut. Readers are invited to suggest questions to be answered in future Pathways columns to Dr. Musiek at frank.musiek@uconn.edu.

I would like to first thank Dr. Musiek for the opportunity to share my conceptualization of what I refer to as spoken-language processing. Many of you may know that central auditory processing (CAP) has been a subject of heated debate since the early 1970s. Because of the lack of a clear definition of what constitutes CAP, a number of task forces have convened to develop consensus statements and establish best-practice principles for its diagnosis and management.1,2

In its most recent attempt to achieve consensus, the second ASHA task force sought to reconcile the importance of auditory processing with the recognition that complete modality specificity (i.e., processing without influence or interaction from any other modality) is neurophysiologically untenable as a diagnostic criterion. In fact, cognitive mechanisms have been shown to influence auditory mechanisms as peripherally as the cochlea,3 and incoming speech stimuli are converted to phonemes as early as 250 milliseconds after stimulus onset.4

It is my belief that we audiologists need not restrict ourselves to the auditory processing aspects alone; in fact, early researchers in our field focused on speech perception rather than just the auditory mechanisms. I have crafted what I refer to as the Spoken-Language Processing Model, which recognizes that spoken-language processing involves the successful intertwining of auditory, cognitive, and language processes.5 The following is a brief synopsis of the key processing stages as conceptualized in this model:

  • Incoming auditory stimuli are converted via multiple transformations to neuroelectric patterns that are compared with neuronal templates stored in long-term memory (LTM).
  • Linguistic information (phonemes, words, semantic relations, syntax) is processed in the left hemisphere, while suprasegmental aspects are processed in the right hemisphere.
  • If there is a match and sufficient attention, the LTM representation is activated (this activated state is referred to as short-term, or conscious, memory). This process, known as decoding, must be done quickly and accurately; the more organized the neuronal connections/templates, the faster the processing speed.
  • The linguistic representations are somehow integrated with the suprasegmental features “on the fly.”
  • Information can reside in short-term memory for a very short period of time (approximately 2 seconds) unless attention is directed to maintain the stimuli in short-term memory.
  • The processed information must be maintained in the same order as presented.
  • Individuals must often listen in the presence of competing noise. In order to attend to “target speech stimuli” in non-linguistic noise, the brain analyzes the competing acoustic streams and filters the speech from the noise. In the case of competing talkers, the listener relies primarily on spatial separation and fundamental frequency differences to selectively attend to the target talker and block out the competing talker(s).
  • A separate process that evolves over time is the establishment of individual sound families (phonemes) and their symbolic representations.
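
For readers who find a computational analogy helpful, the toy Python sketch below restates the core of these stages: decoding as template matching against LTM, activation gated by attention, preservation of presentation order, and decay after roughly two seconds unless rehearsal intervenes. It is a minimal illustration of the model's logic under stated assumptions; the template set and every name in it are invented for demonstration and say nothing about actual neural implementation.

    import time
    from dataclasses import dataclass, field

    # Illustrative stand-ins for stored neuronal templates (phonemes).
    LTM_TEMPLATES = {"ba", "da", "ga"}
    STM_SPAN_SECONDS = 2.0  # the ~2-second short-term span noted above

    @dataclass
    class ShortTermMemory:
        # (phoneme, timestamp) pairs, kept in order of presentation
        items: list = field(default_factory=list)

        def activate(self, pattern: str, attended: bool) -> bool:
            """Decoding: an LTM representation is activated only when the
            incoming pattern matches a template AND attention is present."""
            if pattern in LTM_TEMPLATES and attended:
                self.items.append((pattern, time.monotonic()))
                return True
            return False

        def rehearse(self) -> None:
            """Directed attention refreshes each trace, preventing decay."""
            now = time.monotonic()
            self.items = [(p, now) for p, _ in self.items]

        def contents(self) -> list:
            """Drop traces older than ~2 s; survivors keep their order."""
            now = time.monotonic()
            self.items = [(p, t) for p, t in self.items
                          if now - t < STM_SPAN_SECONDS]
            return [p for p, _ in self.items]

    stm = ShortTermMemory()
    stm.activate("ba", attended=True)   # decoded into short-term memory
    stm.activate("zz", attended=True)   # no LTM template: decoding fails
    time.sleep(2.5)                     # no rehearsal during the delay
    print(stm.contents())               # [] -- the trace has decayed

The sketch deliberately omits the suprasegmental, inter-hemispheric, and competing-talker aspects; the point is only the match-attend-decay sequence.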

Breakdowns can occur in any of the various auditory processing stages. Examples include:

  • Initial auditory transmission of the acoustic signal, an extreme example being neural dys-synchrony, a disorder in which cochlear integrity is normal but the neural pathways leading to the cortical processing regions are unable to process speech effectively.
  • Difficulty with temporal resolution, especially in the precise processing of rapidly occurring acoustic changes of speech stimuli.
  • Impaired inter-hemispheric transfer of information, affecting the successful integration of the suprasegmental aspects processed in the right hemisphere with the linguistic aspects (phonemes, lexemes, semantic and syntactic relations) processed in the left hemisphere.

Breakdowns can also occur in any of the higher-order cognitive/language mechanisms, such as:

  • Disorganized representation of the lexicon, which can slow one's lexical decoding speed.
  • Attentional allocation, which can impact the ability to retain earlier-presented information in short-term memory, or even the initial processing of information.
  • Sequencing, which can lead to difficulty maintaining information or directions in the correct order, or to general disorganization.
  • Phonological awareness (i.e., awareness of the phonological structure that makes up words and the ability to detect and manipulate speech sounds).

The Spoken-Language Processing Model attempts to examine how auditory, cognitive, and language processes interface. It serves as a basis for crafting a comprehensive test battery to determine whether processing deficits are present and, if so, their specific nature, which in turn can guide management.

The fundamental difficulty may be an underlying auditory processing deficit, a cognitive-linguistic one, or both. Through the judicious application of assessment tools (many of which require the use of an audiometer), audiologists are often the professionals best positioned to delineate where these breakdowns are occurring. Consequently, audiologists can provide critical information that can truly make a difference in the lives of those we serve.

REFERENCES

1. American Speech-Language-Hearing Association Task Force on Central Auditory Processing Consensus Development. Central auditory processing: current status of research and implications for clinical practice. Am J Audiol 1996;5:41-54.
2. American Speech-Language-Hearing Association. (Central) auditory processing disorders [technical report]. Working Group on Auditory Processing Disorders. Rockville, MD: Author, 2005.
3. Maison S, Micheyl C, Collet L. Influence of focused auditory attention on cochlear activity in humans. Psychophysiology 2001;38:35-40.
4. Näätänen R. The perception of speech sounds by the human brain as reflected by the mismatch negativity (MMN) and its magnetic equivalent (MMNm). Psychophysiology 2001;38:1-21.
5. Medwetsky L. Mechanisms underlying central auditory processing. In: Katz J, ed. Handbook of Clinical Audiology. 6th ed. Philadelphia: Lippincott Williams & Wilkins, 2009.
© 2011 Lippincott Williams & Wilkins, Inc.