Abstract
Lexical Semantics is concerned with how words encode mental representations of the world, i.e., concepts. We call this first type of concepts classification concepts. In this paper, we focus on Visual Semantics, namely, on how humans build concepts representing what they perceive visually. We call this second type of concepts substance concepts. As shown in the paper, these two types of concepts are different and, furthermore, the mapping between them is many-to-many. We provide a theory and an algorithm for building substance concepts that are in a one-to-one correspondence with classification concepts, thus paving the way to a seamless integration between natural language descriptions and visual perception. This work builds upon three main intuitions: (i) substance concepts are modeled as visual objects, namely, sequences of similar frames, as perceived in multiple encounters; (ii) substance concepts are organized into a visual subsumption hierarchy based on the notions of … and …; (iii) human feedback is exploited not to name objects but, rather, to align the hierarchy of substance concepts with that of classification concepts. The learning algorithm is implemented for the base case of a hierarchy of depth two. The experiments, though preliminary, show that the algorithm manages to acquire the notions of … and … with reasonable accuracy, despite seeing a small number of examples and receiving supervision on only a fraction of them.
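Intuition (i) above models a substance concept as a visual object, i.e., a maximal run of similar consecutive frames. As a minimal illustrative sketch (not the paper's actual algorithm), one can segment a stream of frame embeddings into such runs with a similarity threshold; the function name, the cosine-similarity measure, and the threshold value are all assumptions made here for illustration:

```python
import math


def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors (assumed non-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def segment_visual_objects(frames, threshold=0.9):
    """Group a stream of frame embeddings into 'visual objects':
    maximal runs of consecutive frames whose adjacent similarity
    stays above the threshold. Hypothetical helper, for illustration only."""
    objects = []   # list of visual objects, each a list of frames
    current = []   # the run being built
    for frame in frames:
        # A sharp drop in similarity ends the current visual object.
        if current and cosine_sim(current[-1], frame) < threshold:
            objects.append(current)
            current = []
        current.append(frame)
    if current:
        objects.append(current)
    return objects


# Two near-identical frames followed by a very different one
# yield two visual objects.
objs = segment_visual_objects([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0]])
```

Each encounter would then contribute one such run, and multiple encounters of the same substance concept would be merged, which is where the paper's hierarchy and human feedback come in.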
Funder
European Commission
Università degli Studi di Trento
Publisher
Springer Science and Business Media LLC