Affiliation:
1. College of Information and Intelligence, Hunan Agricultural University, Changsha 410128, China
Abstract
Due to the low storage cost and high computational efficiency of hashing, cross-modal hashing has attracted widespread attention in recent years. In this paper, we investigate how supervised cross-modal hashing (CMH) benefits from multi-label and contrastive learning (CL) by overcoming two challenges: (i) how to combine multi-label and supervised contrastive learning to capture the diverse relationships among cross-modal instances, and (ii) how to reduce the sparsity of multi-label representations so as to improve the accuracy of similarity measurement. To this end, we propose a novel cross-modal hashing framework, dubbed Multi-Label Weighted Contrastive Hashing (MLWCH). This framework involves a compact consistent similarity representation and a newly designed multi-label similarity calculation method that efficiently reduces the sparsity of multi-label representations by eliminating redundant zero elements. Furthermore, a novel multi-label weighted contrastive learning strategy is developed that significantly improves hash learning by assigning similarity weights to positive samples under both linear and non-linear similarity. Extensive experiments and ablation analysis on three benchmark datasets validate the superiority of MLWCH over several strong baselines.
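The multi-label weighted contrastive idea summarized in the abstract can be sketched as follows. This is a minimal illustration only: it assumes cosine overlap between multi-hot label vectors as the pair weight and an InfoNCE-style cross-modal objective; the function names (`multilabel_similarity`, `weighted_contrastive_loss`) and all implementation details are hypothetical and are not taken from the paper's actual formulation.

```python
import numpy as np

def multilabel_similarity(labels):
    """Cosine similarity between multi-hot label vectors.
    Pairs sharing more labels get a weight closer to 1; pairs
    sharing no labels get weight 0."""
    norms = np.linalg.norm(labels, axis=1, keepdims=True)
    normed = labels / np.clip(norms, 1e-12, None)
    return normed @ normed.T

def weighted_contrastive_loss(img_emb, txt_emb, labels, temperature=0.1):
    """InfoNCE-style cross-modal loss in which each positive pair (i, j)
    is weighted by its label similarity w_ij, so instances sharing more
    labels contribute more strongly to the objective."""
    # Normalize embeddings so dot products are cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature                 # (n, n) similarity logits
    w = multilabel_similarity(labels)                  # positive-pair weights
    # Log-softmax over each row (anchor image vs. all texts).
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Weighted average of log-probabilities over all positives.
    pos_mask = w > 0
    weight_sum = np.clip(w[pos_mask].sum(), 1e-12, None)
    return -(w * log_prob)[pos_mask].sum() / weight_sum
```

Treating label overlap as a continuous weight, rather than a binary similar/dissimilar decision, is one common way to let "soft" positives (partially overlapping label sets) pull paired embeddings together in proportion to their semantic relatedness.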
Funder
National Natural Science Foundation of China
Natural Science Foundation of Hunan Province
Scientific Research Project of Hunan Provincial Department of Education
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by 1 article.