Margin-Based Modal Adaptive Learning for Visible-Infrared Person Re-Identification
Author:
Zhao Qianqian 1 (ORCID), Wu Hanxiao 1, Zhu Jianqing 2,3
Affiliations:
1. College of Information Science and Engineering, Huaqiao University, Xiamen 361021, China
2. College of Engineering, Huaqiao University, Quanzhou 362021, China
3. Xiamen Yealink Network Technology Company Limited, Xiamen 361015, China
Abstract
Visible-infrared person re-identification (VIPR) has great potential for intelligent transportation systems in smart cities, but it remains challenging because of the large modal discrepancy between visible and infrared images. Although visible and infrared data can be regarded as two domains, VIPR is not identical to domain adaptation, which aims to eliminate modal discrepancies as completely as possible. Because VIPR has complete identity information on both the visible and infrared modalities, overemphasizing domain adaptation drains the discriminative appearance information from the two domains. We therefore propose a novel margin-based modal adaptive learning (MMAL) method for VIPR. On each domain, we apply triplet and label-smoothing cross-entropy loss functions to learn appearance-discriminative features. Between the two domains, we design a simple yet effective marginal maximum mean discrepancy (M3D) loss function that avoids excessively suppressing modal discrepancies and thus protects the discriminative ability of the features on each domain. As a result, our MMAL method learns modal-invariant yet appearance-discriminative features that improve VIPR. Experimental results show that our MMAL method achieves state-of-the-art VIPR performance; for example, on the RegDB dataset in the visible-to-infrared retrieval mode, it reaches a rank-1 accuracy of 93.24% and a mean average precision of 83.77%.
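The cross-domain idea is easiest to see in loss form. Below is a minimal PyTorch sketch of a margin-based MMD loss in the spirit of M3D, assuming a Gaussian-kernel MMD estimate with a hinge-style margin; the kernel choice, bandwidth, margin value, and function names are illustrative assumptions rather than the paper's exact formulation.

import torch

def gaussian_mmd(x, y, sigma=1.0):
    # Biased squared-MMD estimate between feature batches x and y
    # using a Gaussian kernel of bandwidth sigma.
    def kernel(a, b):
        d2 = torch.cdist(a, b) ** 2  # pairwise squared Euclidean distances
        return torch.exp(-d2 / (2.0 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2.0 * kernel(x, y).mean()

def m3d_loss(vis_feats, ir_feats, margin=0.1, sigma=1.0):
    # Penalize the modal discrepancy only above a margin, so the two
    # modalities are pulled together without being fully collapsed,
    # which would drain appearance-discriminative information.
    mmd = gaussian_mmd(vis_feats, ir_feats, sigma)
    return torch.clamp(mmd - margin, min=0.0)

# Usage sketch: vis_feats / ir_feats are embeddings of the same batch size
# produced by a shared backbone for visible and infrared images.
vis_feats = torch.randn(32, 256)
ir_feats = torch.randn(32, 256)
loss = m3d_loss(vis_feats, ir_feats)

For the per-domain terms, PyTorch's built-in torch.nn.TripletMarginLoss and torch.nn.CrossEntropyLoss(label_smoothing=...) correspond to the triplet and label-smoothing cross-entropy functions mentioned in the abstract, and the margin hinge above is what distinguishes this loss from a plain MMD penalty.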
Funders
National Natural Science Foundation of China
Natural Science Foundation for Outstanding Young Scholars of Fujian Province
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
2 articles.