This study investigated automatic assessment of vocal development in children with hearing loss compared with children who are typically developing, have language delays, and have autism spectrum disorder. Statistical models were examined for their performance in classifying children into the four groups and in predicting age within each group.
The vocal analysis system analyzed 1913 whole-day, naturalistic acoustic recordings from 273 toddlers and preschoolers comprising children who were typically developing, hard of hearing, language delayed, or autistic.
Samples from children who were hard of hearing patterned more similarly to those of typically developing children than to the language-delayed or autistic samples. The statistical models were able to classify children from the four groups examined and to estimate developmental age based on automated vocal analysis.
This work shows a broad similarity between children with hearing loss and typically developing children, although children with hearing loss show some delay in their production of speech. Automatic acoustic analysis can now be used to quantitatively compare vocal development in children with and without speech-related disorders. The work may serve to better distinguish among various developmental disorders and ultimately contribute to improved intervention.
Automatic analyses of a large dataset of naturalistic child vocalizations were performed on recordings from children who were typically developing, hard of hearing, or language delayed, or who had autism spectrum disorder. Based on the automatic audio processing, the statistical models were able to accurately classify children and predict child age, taking group classification into account. Children who were hard of hearing patterned more similarly to the typically developing group than to the language-delayed children or the children with autism. The work may serve to better distinguish among various developmental disorders and ultimately contribute to improved intervention.
1Department of Speech & Hearing Sciences, Medical Sciences, Washington State University, Spokane, Washington, USA; 2School of Communication Sciences and Disorders and 3Institute for Intelligent Systems, University of Memphis, Memphis, Tennessee, USA; 4Konrad Lorenz Institute for Evolution and Cognition Research, Klosterneuburg, Austria; 5Center for Childhood Deafness, Boys Town National Research Hospital, Omaha, Nebraska, USA; 6Nuance Communications, Burlington, Massachusetts, USA; 7LENA Research Foundation, Boulder, Colorado, USA; and 8University of Cincinnati, Cincinnati, Ohio, USA.
This work was supported by National Institutes of Health Grants/NIDCD R01-DC009560 (co-principal investigators, J. Bruce Tomblin, University of Iowa, and M.P.M., Boys Town National Research Hospital) and (LENA Supplement) R01-DC009560-01S1. The role of D.K.O. in the paper was supported by Grant R01-DC011027 from the National Institute on Deafness and Other Communication Disorders and by the Plough Foundation. J.A.R., D.X., and J.G. are employees of the LENA Research Foundation. S.G. is a former employee of the LENA Research Foundation. D.K.O. is an unpaid member of the Scientific Advisory Board of the LENA Research Foundation. The content of this project is solely the responsibility of the authors and does not necessarily represent the official views of the National Institute on Deafness and Other Communication Disorders or the National Institutes of Health.
Received May 21, 2014; accepted November 24, 2014.
Address for correspondence: Mark VanDam, Department of Speech & Hearing Sciences, Medical Sciences, Washington State University, 412 E. Spokane Falls Boulevard, Spokane, WA 99202, USA. E-mail: firstname.lastname@example.org