Abstract
When listening to speech, our brain responses time-lock to acoustic events in the stimulus. Recent studies have also reported that cortical responses track linguistic representations of speech. However, tracking of these representations is often described without controlling for acoustic properties. Therefore, the response to these linguistic representations might reflect unaccounted-for acoustic processing rather than language processing. Here, we evaluated the potential of several recently proposed linguistic representations as neural markers of speech comprehension. To do so, we investigated EEG responses to audiobook speech of 29 participants (22 ♀). We examined whether these representations contribute unique information over and beyond acoustic neural tracking and each other. Indeed, not all of these linguistic representations were significantly tracked after controlling for acoustic properties. However, phoneme surprisal, cohort entropy, word surprisal, and word frequency were all significantly tracked over and beyond acoustic properties. We also tested the generality of the associated responses by training on one story and testing on another. In general, the linguistic representations were tracked similarly across different stories spoken by different readers. These results suggest that these representations characterize processing of the linguistic content of speech.
Significance Statement
For clinical applications it would be desirable to develop a neural marker of speech comprehension derived from neural responses to continuous speech. Such a measure would allow for behaviour-free evaluation of speech understanding; this would open doors towards better quantification of speech understanding in populations from whom obtaining behavioural measures may be difficult, such as young children or people with cognitive impairments, allowing better-targeted interventions and better fitting of hearing devices.
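The core analysis described above (testing whether a linguistic representation improves EEG prediction over and beyond acoustic features, with training and testing on different material) can be sketched as follows. This is a minimal illustration on simulated data; the ridge-regression estimator, the half/half split standing in for the cross-story evaluation, and all variable names are assumptions for illustration rather than the authors' actual pipeline.

```python
# Minimal sketch of the model-comparison logic on simulated data. All names,
# the ridge-regression estimator, and the simulated signals are illustrative
# assumptions, not the study's actual pipeline (which may use TRF/boosting tools).
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n = 5000                                    # time samples
acoustic = rng.standard_normal((n, 8))      # e.g., spectrogram bands
linguistic = rng.standard_normal((n, 2))    # e.g., word surprisal, word frequency
eeg = (acoustic @ rng.standard_normal(8)
       + 0.3 * (linguistic @ rng.standard_normal(2))
       + rng.standard_normal(n))            # one simulated EEG channel

# "Train on one story, test on another": here simply first half vs. second half.
split = n // 2

def prediction_accuracy(features: np.ndarray) -> float:
    """Fit on the first half, return correlation of predicted vs. held-out EEG."""
    model = Ridge(alpha=1.0).fit(features[:split], eeg[:split])
    return float(np.corrcoef(model.predict(features[split:]), eeg[split:])[0, 1])

acc_acoustic = prediction_accuracy(acoustic)
acc_combined = prediction_accuracy(np.hstack([acoustic, linguistic]))
print(f"acoustic only:         r = {acc_acoustic:.3f}")
print(f"acoustic + linguistic: r = {acc_combined:.3f}")
# A reliable gain of the combined model over the acoustic-only model would indicate
# that the linguistic features carry unique information about the EEG response.
```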
Publisher
Cold Spring Harbor Laboratory