Deep Triplet Neural Networks with Cluster-CCA for Audio-Visual Cross-Modal Retrieval

Author:

Zeng Donghuo¹, Yu Yi¹, Oyama Keizo¹

Affiliation:

1. National Institute of Informatics, SOKENDAI

Abstract

Cross-modal retrieval aims to retrieve data in one modality given a query in another, and has become an active research topic in multimedia, information retrieval, computer vision, and databases. Most existing work focuses on cross-modal retrieval between text and images, text and video, or lyrics and audio; little research addresses retrieval between audio and video, owing to the scarcity of paired audio-video datasets with semantic annotations. The main challenge of audio-visual cross-modal retrieval is to learn joint embeddings in a shared subspace in which similarity across modalities can be computed, where the new representations should maximize the correlation between the audio and visual modalities. In this work, we propose TNN-C-CCA, a novel deep triplet neural network with cluster canonical correlation analysis: an end-to-end supervised learning architecture with an audio branch and a video branch. When maximizing correlation, we consider not only the matching pairs in the common space but also the mismatching pairs. In particular, two significant contributions are made. First, a deep triplet neural network trained with a triplet loss yields optimal projections that maximize correlation in the shared subspace. Second, both positive and negative examples are used during learning to improve the quality of the embeddings learned for audio and video. Our experiments use fivefold cross validation, and average performance is reported for audio-video cross-modal retrieval. Experimental results on two audio-visual datasets show that the proposed two-branch architecture outperforms six existing canonical correlation analysis-based methods and four state-of-the-art cross-modal retrieval methods.
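The margin-based triplet loss described above (pulling matching audio-video pairs together while pushing mismatching pairs apart in the shared subspace) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the margin value, embedding dimension, and variable names are assumptions for illustration.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embeddings in the shared subspace:
    penalize when the matching (anchor, positive) pair is not at least
    `margin` closer than the mismatching (anchor, negative) pair."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)  # matching distance
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)  # mismatching distance
    return np.maximum(0.0, d_pos - d_neg + margin)

# Toy embeddings: one audio anchor, a matching video, and a mismatch.
a = np.array([0.0, 1.0])
p = np.array([0.0, 1.1])   # close to the anchor (matching pair)
n = np.array([3.0, -2.0])  # far from the anchor (mismatching pair)
loss = triplet_loss(a, p, n)  # distant mismatch satisfies the margin, so 0
```

In the paper's setting the anchors, positives, and negatives would be audio and video embeddings produced by the two network branches; during training the loss is minimized over many sampled triplets.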

Funder

JSPS Grant-in-Aid for Scientific Research

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Networks and Communications, Hardware and Architecture


Cited by 35 articles.

1. BagFormer: Better cross-modal retrieval via bag-wise interaction;Engineering Applications of Artificial Intelligence;2024-10

2. Deep supervised fused similarity hashing for cross-modal retrieval;Multimedia Tools and Applications;2024-06-21

3. Anchor-aware Deep Metric Learning for Audio-visual Retrieval;Proceedings of the 2024 International Conference on Multimedia Retrieval;2024-05-30

4. An Intelligent Retrieval Method for Audio and Video Content: Deep Learning Technology Based on Artificial Intelligence;IEEE Access;2024

5. A Technical/Scientific Document Management Platform;Lecture Notes in Computer Science;2024
