Cross-Domain Person Re-Identification Based on Feature Fusion Invariance
Published: 2024-05-28
Issue: 11
Volume: 14
Page: 4644
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Author:
Zhang Yushi 1, Song Heping 2 [ORCID], Wei Jiawei 3
Affiliation:
1. School of Information, North China University of Technology, Beijing 100144, China
2. School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang 212013, China
3. School of Ethnology and History, Yunnan Minzu University, Kunming 650504, China
Abstract
Cross-domain person re-identification aims to identify the same individual across different cameras or environments and must overcome the challenges posed by scene variations; this remains a primary challenge in person re-identification and a bottleneck for its practical application. In this paper, we learn an invariance model of cross-domain feature fusion from a labeled source domain and an unlabeled target domain. First, our method learns global and local fusion features of a person in the source domain by supervised learning, using only person identities and no part-level labels, and obtains fused person features in the source and target domains by unsupervised learning. Building on these fused features, we introduce a feature memory to store the fused target-domain features and design a cross-domain invariance loss function to improve cross-domain adaptability. Finally, we carry out cross-domain person re-identification experiments between the Market-1501 and DukeMTMC-reID datasets; the results show that the proposed method achieves a significant performance improvement in cross-domain person re-identification.
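To make the feature-memory and invariance-loss idea concrete, the following is a minimal PyTorch-style sketch, not the paper's actual implementation: it assumes a memory bank with one slot per unlabeled target image, updated by an exponential moving average, and an exemplar-invariance loss that pulls each target feature toward its own stored slot. All names and hyperparameters (FeatureMemory, momentum, temperature) are illustrative assumptions.

```python
# Sketch of a target-domain feature memory with an exemplar-invariance loss.
# Assumptions: fused features come from some backbone; momentum/temperature
# values are placeholders, not taken from the paper.
import torch
import torch.nn.functional as F


class FeatureMemory(torch.nn.Module):
    def __init__(self, num_target_images: int, feat_dim: int,
                 momentum: float = 0.2, temperature: float = 0.05):
        super().__init__()
        self.momentum = momentum
        self.temperature = temperature
        # One slot per target-domain image, holding its fused feature.
        self.register_buffer("memory", torch.zeros(num_target_images, feat_dim))

    def forward(self, feats: torch.Tensor, indices: torch.Tensor) -> torch.Tensor:
        feats = F.normalize(feats, dim=1)
        # Similarity of each batch feature to every stored target feature.
        logits = feats @ self.memory.t() / self.temperature
        # Exemplar invariance: each image should match its own memory slot.
        loss = F.cross_entropy(logits, indices)
        # Update the memory with an exponential moving average (no gradient).
        with torch.no_grad():
            updated = self.momentum * self.memory[indices] + (1.0 - self.momentum) * feats
            self.memory[indices] = F.normalize(updated, dim=1)
        return loss


# Usage sketch: `backbone` yields fused global/local features for a target batch,
# `image_indices` are the dataset indices of those images.
# memory = FeatureMemory(num_target_images=12936, feat_dim=2048)
# loss = memory(backbone(images), image_indices)
```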
Funder
National Natural Science Foundation of China