Abstract
Sensory processing is increasingly conceived in a predictive framework in which neurons constantly process the error signal resulting from the comparison of expected and observed stimuli. Surprisingly, few data exist on how much of a real sensory scene can actually be predicted. Here, we focus on the sensory processing of auditory and audiovisual speech. We propose a set of computational models based on artificial neural networks (mixing deep feed-forward and convolutional networks) trained to predict future audio observations from the preceding 25 ms to 250 ms of audio or audiovisual observations (i.e., including lip movements). Experiments are conducted on the multispeaker NTCD-TIMIT audiovisual speech database. Predictions are efficient over a short temporal range (25-50 ms), explaining 40 to 60% of the variance of the incoming stimulus, which could save up to two thirds of the processing power. They then decrease quickly and vanish after 100 ms. Adding information on the lips slightly improves predictions, with a 5 to 10% increase in explained variance. Interestingly, the visual gain vanishes more slowly and is maximal for a delay of 75 ms between the image and the predicted sound.
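A minimal sketch of such a predictor is given below, assuming 25 ms log-mel audio frames and illustrative layer sizes; the model name, context length, feature dimension, and layer widths are our assumptions for illustration, not the authors' implementation. It mixes a convolutional front-end over the past-frame window with a feed-forward head, and scores predictions by explained variance.

```python
# Hypothetical sketch (not the authors' code): predict the next audio frame
# from a short window of past frames; evaluate with explained variance.
import torch
import torch.nn as nn

N_PAST = 10       # number of past frames in the context window (assumption)
FRAME_DIM = 40    # e.g. log-mel features per 25 ms frame (assumption)

class AudioPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # 1-D convolutions over the temporal axis of the past-frame window
        self.conv = nn.Sequential(
            nn.Conv1d(FRAME_DIM, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # feed-forward head mapping the pooled context to the predicted frame
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * N_PAST, 128),
            nn.ReLU(),
            nn.Linear(128, FRAME_DIM),
        )

    def forward(self, past):            # past: (batch, N_PAST, FRAME_DIM)
        x = past.transpose(1, 2)        # -> (batch, FRAME_DIM, N_PAST)
        return self.head(self.conv(x))  # -> (batch, FRAME_DIM)

def explained_variance(pred, target):
    """Fraction of the target variance captured by the prediction."""
    residual = target - pred
    return 1.0 - residual.var() / target.var()

# minimal usage on random data
model = AudioPredictor()
past = torch.randn(8, N_PAST, FRAME_DIM)
future = torch.randn(8, FRAME_DIM)
print(explained_variance(model(past), future).item())
```

An audiovisual variant would simply concatenate lip-movement features to each past frame before the convolutional front-end, which is one plausible reading of the audiovisual condition described above.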
Publisher: Cold Spring Harbor Laboratory