Author:
Kuniyasu Ryo, Nakamura Tomoaki, Taniguchi Tadahiro, Nagai Takayuki
Abstract
We propose a method for multimodal concept formation. The method performs unsupervised multimodal clustering, cross-modal inference, and unsupervised representation learning by integrating multimodal latent Dirichlet allocation (MLDA)-based concept formation with variational autoencoder (VAE)-based feature extraction. Multimodal clustering, representation learning, and cross-modal inference are critical for robots to form multimodal concepts from sensory data. Various models have been proposed for concept formation; however, in previous studies, features were extracted using manually designed or pre-trained feature extractors, and representation learning was not performed simultaneously. Moreover, while the generative probabilities of features extracted from the sensory data could be predicted, the sensory data themselves could not be predicted through cross-modal inference. Therefore, concept formation requires a method that can jointly perform clustering, feature learning, and cross-modal inference over multimodal sensory data. To realize such a method, we extend the VAE to the multinomial VAE (MNVAE), whose latent variables follow a multinomial distribution, and construct a model that integrates the MNVAE and MLDA. In the experiments, the multimodal information of images and words acquired by a robot was classified using the integrated model. The results demonstrate that the integrated model classifies the multimodal information as accurately as the previous model even though its feature extractor is trained in an unsupervised manner, that image features suitable for clustering can be learned, and that cross-modal inference from words to images is possible.
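As a rough illustration of the kind of latent variable the abstract describes, the sketch below shows one common way to draw a differentiable sample from a categorical/multinomial-style latent: the Gumbel-softmax relaxation. This is a minimal NumPy toy, not the authors' MNVAE; the encoder weights, dimensions, and temperature are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax_sample(logits, temperature=0.5):
    """Draw a relaxed one-hot sample from a categorical distribution.

    Sketch only: a multinomial-style latent (as in the MNVAE) could be
    approximated with the Gumbel-softmax trick so that sampling remains
    differentiable. `logits` has shape (batch, K).
    """
    # Gumbel(0, 1) noise via inverse transform sampling.
    u = rng.uniform(1e-9, 1.0, logits.shape)
    gumbel = -np.log(-np.log(u))
    y = (logits + gumbel) / temperature
    y = y - y.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(y)
    return e / e.sum(axis=-1, keepdims=True)

# Toy "encoder": a linear map from a D-dim feature vector to K latent logits.
K, D = 10, 32                          # hypothetical latent size / feature size
W_enc = rng.normal(0, 0.1, (D, K))
x = rng.normal(size=(4, D))            # batch of 4 feature vectors
logits = x @ W_enc
z = gumbel_softmax_sample(logits)      # shape (4, K); each row sums to 1
```

Each row of `z` is a point on the probability simplex, so it can play the role of a document-level topic proportion when the VAE latent is handed to an LDA-style clustering model; lowering `temperature` pushes the samples toward one-hot vectors.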
Cited by 4 articles.