Lightweight Multimodal Domain Generic Person Reidentification Metric for Person-Following Robots
Authors:
Syed Muhammad Adnan, Ou Yongsheng, Li Tao, Jiang Guolai
Abstract
Person-following robots are increasingly deployed in real-world applications, and they require robust and accurate person identification for tracking. Recent works use re-identification metrics to identify the target person; however, these metrics generalize poorly and are confused by impostors in the nonlinear, multi-modal world. This work learns a domain-generic person re-identification metric to resolve real-world challenges and to identify a target person whose appearance changes while moving across different indoor and outdoor environments, or domains. Our generic metric takes advantage of a novel attention mechanism to learn deep cross-representations that address pose, viewpoint, and illumination variations, while jointly tackling impostors and the style variations the target person randomly undergoes across indoor and outdoor domains. As a result, our generic metric attains higher recognition accuracy for target person identification in the complex multi-modal open-set world, and reaches 80.73% and 64.44% Rank-1 identification on the multi-modal closed-set PRID and VIPeR domains, respectively.
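The abstract describes an attention mechanism that pools deep features into an embedding and a metric that ranks gallery candidates by distance to the target. The paper's actual architecture is not given here, so the sketch below is purely illustrative: the function names (`attention_pool`, `reid_distance`), the random stand-in for a learned attention query, and the use of cosine distance are all assumptions, not the authors' method.

```python
# Hypothetical sketch of an attention-weighted re-identification metric.
# All names and the fixed random "learned query" are illustrative assumptions.
import numpy as np

def attention_pool(features: np.ndarray) -> np.ndarray:
    """Collapse per-region features of shape (R, D) into one embedding (D,)
    using softmax attention scores against a query vector (here a fixed
    random stand-in for a learned query)."""
    rng = np.random.default_rng(0)
    query = rng.standard_normal(features.shape[1])
    scores = features @ query / np.sqrt(features.shape[1])
    weights = np.exp(scores - scores.max())      # numerically stable softmax
    weights /= weights.sum()
    return weights @ features                    # attention-weighted sum

def reid_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine distance between two attention-pooled feature maps;
    smaller means more likely the same person."""
    ea, eb = attention_pool(a), attention_pool(b)
    cos = ea @ eb / (np.linalg.norm(ea) * np.linalg.norm(eb) + 1e-12)
    return 1.0 - float(cos)
```

In a closed-set evaluation such as PRID or VIPeR, Rank-1 accuracy would then be the fraction of queries whose nearest gallery embedding under this distance belongs to the correct identity.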
Funder
National Key Research and Development Program of China; National Natural Science Foundation of China; Guangdong Basic and Applied Basic Research Foundation; Shenzhen Fundamental Research Program
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
3 articles.