Affiliation:
1. School of Software Technology, Dalian University of Technology, China
2. College of Natural Resources and Environment, South China Agricultural University, China
Abstract
With the rapid increase of multimedia content on the Internet, effective cross-modal retrieval has attracted much attention. Many related works focus only on semantic mappings between modalities in linear space and rely on low-level hand-crafted features as modality representations, ignoring the latent semantic correlations of modalities in non-linear space and the extraction of high-level modality features. To address these issues, the authors first utilize convolutional neural networks and topic models to obtain high-level semantic features for each modality. They then propose a supervised learning algorithm based on kernel partial least squares that captures semantic correlations across modalities. Finally, a joint model of the different modalities is learned from the training set. Extensive experiments are conducted on three benchmark datasets: Wikipedia, Pascal, and MIRFlickr. The results show that the proposed approach achieves better retrieval performance than several state-of-the-art approaches.
Subject
Computer Networks and Communications
Cited by 4 articles.