Abstract
Social communication problems, which are a core symptom of autism, may be evident in the speech characteristics of autistic individuals. Here, we examined acoustic and conversational features extracted from audio recordings of Autism Diagnostic Observation Schedule, 2nd edition (ADOS-2) assessments of 1.5- to 7-year-old children. We trained a deep neural network algorithm to estimate autism severity (i.e., ADOS-2 scores) from recordings of 146 children and tested its accuracy with independent recordings from 62 additional children who completed two ADOS-2 assessments, separated by 1-2 years. Estimated ADOS-2 social affect scores in the test set were significantly correlated with true scores at each time-point (r(62) = 0.442-0.575, P < 0.001), and estimated changes across time-points were significantly correlated with true changes (r(62) = 0.343, P = 0.011). The presented algorithm learned to estimate social symptom severity from speech recordings of one autism group and was able to accurately estimate severity changes in an entirely independent group. While accuracy needs to be further improved by training with larger datasets, these results demonstrate the remarkable utility of speech analysis algorithms in estimating autism risk and severity changes over time.
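To make the evaluation scheme concrete, the sketch below illustrates the general approach described in the abstract: a small feed-forward regressor trained on per-recording feature vectors, evaluated by the Pearson correlation between estimated and true ADOS-2 social affect scores at each time-point and between estimated and true change scores across time-points. This is not the authors' code; the feature dimensionality, network architecture, score ranges, and the use of synthetic placeholder data are all assumptions for illustration only.

```python
# Minimal sketch (assumptions, not the authors' implementation):
# regress ADOS-2 social affect scores from acoustic/conversational
# feature vectors and evaluate with Pearson correlations, mirroring
# the analysis structure described in the abstract.
import numpy as np
from scipy.stats import pearsonr
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Assumed shapes: one feature vector per recording; values here are
# synthetic placeholders standing in for real acoustic features.
n_train, n_test, n_features = 146, 62, 40
X_train = rng.normal(size=(n_train, n_features))
y_train = rng.uniform(1, 10, size=n_train)   # training ADOS-2 scores

# Two assessments per test child, separated by 1-2 years.
X_test_t1 = rng.normal(size=(n_test, n_features))
X_test_t2 = rng.normal(size=(n_test, n_features))
y_test_t1 = rng.uniform(1, 10, size=n_test)
y_test_t2 = rng.uniform(1, 10, size=n_test)

# Small deep regressor standing in for the paper's neural network.
model = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0)
model.fit(X_train, y_train)

est_t1 = model.predict(X_test_t1)
est_t2 = model.predict(X_test_t2)

# Correlation of estimated vs. true scores at each time-point.
r1, p1 = pearsonr(est_t1, y_test_t1)
r2, p2 = pearsonr(est_t2, y_test_t2)

# Correlation of estimated vs. true change across time-points.
r_change, p_change = pearsonr(est_t2 - est_t1, y_test_t2 - y_test_t1)

print(f"time 1: r={r1:.3f} (P={p1:.3g}); time 2: r={r2:.3f} (P={p2:.3g})")
print(f"change: r={r_change:.3f} (P={p_change:.3g})")
```

With real features and labels in place of the synthetic arrays, the printed correlations would correspond to the per-time-point and change-score statistics reported in the abstract.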
Publisher
Cold Spring Harbor Laboratory