Abstract
How is semantic information stored in the human mind and brain? Some philosophers and cognitive scientists argue for vectorial representations of concepts, in which the meaning of a word is represented as its position in a high-dimensional neural state space. At the intersection of natural language processing and artificial intelligence, a class of very successful distributional word vector models has emerged that can account for classic EEG findings of language processing, i.e., the ease vs. difficulty of integrating a word with its sentence context. However, models of semantics must not only account for context-based word processing, but should also describe how word meaning is represented. Here, we investigate whether distributional vector representations of word meaning can model brain activity induced by words presented without context. Using EEG activity (event-related brain potentials) collected while participants in two experiments (English, German) read isolated words, we encode and decode word vectors taken from the family of prediction-based word2vec algorithms. We find that, first, the position of a word in vector space allows the prediction of the pattern of corresponding neural activity over time, in particular during a time window of 300 to 500 ms after word onset. Second, distributional models outperform a human-created taxonomic baseline model (WordNet), and this holds for several distinct vector-based models. Third, multiple latent semantic dimensions of word meaning can be decoded from brain activity. Combined, these results suggest that empiricist, prediction-based vectorial representations of meaning are a viable candidate for the representational architecture of human semantic knowledge.
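The abstract describes encoding (word vector → ERP pattern) and decoding (ERP pattern → word vector) analyses, but does not specify the mapping used. The sketch below is a minimal, hypothetical illustration of such a cross-validated linear encoding/decoding scheme, assuming ridge regression as the linear map, synthetic arrays standing in for real word2vec vectors and EEG epochs, and the 300–500 ms window mentioned in the abstract; none of these specifics should be read as the authors' actual pipeline.

```python
# Hypothetical sketch of vector-based encoding/decoding of ERP data.
# Assumptions not stated in the abstract: ridge regression as the linear
# mapping, synthetic random data in place of real word2vec vectors and
# EEG epochs, and arbitrary dimensionalities.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

n_words, n_dims = 200, 300        # words x word2vec dimensions (illustrative sizes)
n_channels, n_times = 64, 50      # EEG channels x samples in the 300-500 ms window

# Stand-ins for real data: one vector and one averaged ERP pattern per word.
word_vectors = rng.standard_normal((n_words, n_dims))
erp_patterns = rng.standard_normal((n_words, n_channels * n_times))

def cross_validated_mapping(X, Y, alpha=1.0, n_splits=5):
    """Fit a ridge regression from X to Y with k-fold cross-validation and
    return the mean correlation between predicted and observed rows of Y."""
    fold_scores = []
    for train, test in KFold(n_splits=n_splits, shuffle=True, random_state=0).split(X):
        model = Ridge(alpha=alpha).fit(X[train], Y[train])
        predictions = model.predict(X[test])
        # Correlate predicted and observed patterns for each held-out word.
        r = [np.corrcoef(p, y)[0, 1] for p, y in zip(predictions, Y[test])]
        fold_scores.append(np.mean(r))
    return float(np.mean(fold_scores))

# Encoding: predict neural activity from a word's position in vector space.
print("encoding score:", cross_validated_mapping(word_vectors, erp_patterns))
# Decoding: recover the semantic vector from the neural activity pattern.
print("decoding score:", cross_validated_mapping(erp_patterns, word_vectors))
```

With random data the scores hover around zero; the logic of the comparison in the abstract would be to run the same procedure with real ERP data and with competing semantic models (e.g., word2vec variants vs. a WordNet-derived baseline) and compare the resulting cross-validated scores.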
Publisher
Cold Spring Harbor Laboratory
Cited by: 2 articles.