Authors:
Desbordes Théo, Lakretz Yair, Chanoine Valérie, Oquab Maxime, Badier Jean-Michel, Trébuchon Agnès, Carron Romain, Bénar Christian-G., Dehaene Stanislas, King Jean-Rémi
Abstract
A sentence is more than the sum of its words: its meaning depends on how they combine with one another. The brain mechanisms underlying such semantic composition remain poorly understood. To shed light on the neural vector code underlying semantic composition, we introduce two hypotheses: first, the intrinsic dimensionality of the space of neural representations should increase as a sentence unfolds, paralleling the growing complexity of its semantic representation; and second, this progressive integration should be reflected in ramping and sentence-final signals. To test these predictions, we designed a dataset of closely matched normal and Jabberwocky sentences (composed of meaningless pseudo-words) and presented them to deep language models and to 11 human participants (5 men and 6 women) monitored with simultaneous magnetoencephalography and intracranial electroencephalography. In both deep language models and electrophysiological data, we found that representational dimensionality was higher for meaningful sentences than for Jabberwocky. Furthermore, multivariate decoding of normal versus Jabberwocky sentences confirmed three dynamic patterns: (i) a phasic pattern following each word, peaking in temporal and parietal areas; (ii) a ramping pattern, characteristic of the bilateral inferior and middle frontal gyri; and (iii) a sentence-final pattern in the left superior frontal gyrus and right orbitofrontal cortex. These results provide a first glimpse into the neural geometry of semantic integration and constrain the search for a neural code of linguistic composition.
Significance statement
Starting from general linguistic concepts, we make two sets of predictions about neural signals evoked by reading multi-word sentences. First, the intrinsic dimensionality of the representation should grow with each additional meaningful word. Second, the neural dynamics should exhibit signatures of encoding, maintaining, and resolving semantic composition.
We successfully validated these hypotheses in deep neural language models: artificial neural networks trained on text that perform well on many natural language processing tasks. Then, using a unique combination of magnetoencephalography and intracranial electrodes, we recorded high-resolution brain data from human participants while they read a controlled set of sentences. Time-resolved dimensionality analysis showed increasing dimensionality with meaning, and multivariate decoding allowed us to isolate the three dynamic patterns we had hypothesized.
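The abstract does not specify which estimator of intrinsic dimensionality was used, but a common choice for this kind of analysis is the participation ratio of the covariance eigenvalue spectrum: it equals k when variance is spread equally across k dimensions and approaches 1 when a single direction dominates. The sketch below is illustrative only; the function name and the toy data are assumptions, not the paper's pipeline.

```python
import numpy as np

def participation_ratio(X):
    """Estimate the intrinsic dimensionality of a set of activation vectors.

    X: array of shape (n_samples, n_features), e.g. one row per sentence
    (or per trial) of model or neural activations at a given time point.
    Returns (sum(lambda))^2 / sum(lambda^2), where lambda are the
    eigenvalues of the sample covariance of X.
    """
    X = X - X.mean(axis=0)                 # center each feature
    cov = X.T @ X / (X.shape[0] - 1)       # sample covariance matrix
    eig = np.linalg.eigvalsh(cov)          # eigenvalues (real, ascending)
    eig = np.clip(eig, 0.0, None)          # guard against tiny negatives
    return eig.sum() ** 2 / (eig ** 2).sum()

# Toy check: variance confined to 3 of 10 feature dimensions
rng = np.random.default_rng(0)
X = np.zeros((500, 10))
X[:, :3] = rng.standard_normal((500, 3))
print(participation_ratio(X))              # close to 3 (sampling noise aside)
```

Applied at successive word positions, such an estimator would yield a time-resolved dimensionality curve, which the study's logic predicts should rise faster for normal sentences than for Jabberwocky.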
Publisher
Cold Spring Harbor Laboratory
References: 118 articles.
Cited by: 1 article.