Abstract
Person re-identification (Re-ID) technology, which accurately retrieves specific pedestrians from massive video data, has been a research hotspot in intelligent video surveillance. Most research focuses on short-term person Re-ID scenarios and addresses general problems such as occlusion, illumination change, and view variance. The appearance-change and similar-appearance problems that arise in long-term scenarios have received far less attention. This paper proposes a novel Re-ID framework consisting of a two-branch model that fuses appearance and gait features to overcome covariate changes. Firstly, we extract appearance features from a video sequence with ResNet50 and aggregate them by average pooling. Secondly, we design an improved gait representation that captures a person’s motion information and excludes the effects of external covariates. Specifically, we accumulate the differences between silhouettes to form an active energy image (AEI) and then mask the mid-body part of the image with the Improved-Sobel-Masking operator to obtain the final gait representation, called ISMAEI. Thirdly, we combine the appearance features with the gait features to generate discriminative and robust fused features. Finally, the Euclidean norm is adopted to compute the distance between probe and gallery samples for person Re-ID. The proposed method is evaluated on the CASIA Gait Database B and TUM-GAID datasets. Compared with state-of-the-art methods, experimental results demonstrate that it performs better in both Rank-1 accuracy and mAP.
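The sketch below illustrates the two-branch pipeline summarized above, assuming PyTorch/torchvision. The gait-branch architecture, the 64×64 gait-image size, the fusion by concatenation, and helper names such as `active_energy_image` are illustrative assumptions, not the paper's exact design; the Improved-Sobel-Masking step is only indicated by a comment.

```python
import torch
import torch.nn as nn
from torchvision import models


def active_energy_image(silhouettes):
    # Accumulate frame-to-frame silhouette differences into an AEI (rough sketch).
    # silhouettes: (T, H, W) binary masks; the ISM masking of the mid-body region
    # described in the abstract would be applied to the result.
    diffs = torch.abs(silhouettes[1:] - silhouettes[:-1])  # (T-1, H, W)
    return diffs.float().mean(dim=0)                        # average motion energy map


class TwoBranchReID(nn.Module):
    def __init__(self, gait_feat_dim=256):
        super().__init__()
        backbone = models.resnet50(weights=None)            # appearance branch (ResNet50)
        self.appearance = nn.Sequential(*list(backbone.children())[:-1])  # drop classifier
        # Hypothetical gait branch: maps a 64x64 gait image to a feature vector.
        self.gait_branch = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 64, gait_feat_dim),
            nn.ReLU(),
        )

    def forward(self, frames, gait_image):
        # frames: (T, 3, H, W) video sequence; gait_image: (1, 64, 64) gait representation.
        frame_feats = self.appearance(frames).flatten(1)    # (T, 2048) per-frame features
        app_feat = frame_feats.mean(dim=0)                  # temporal average pooling -> (2048,)
        gait_feat = self.gait_branch(gait_image.unsqueeze(0)).squeeze(0)
        return torch.cat([app_feat, gait_feat])             # fused descriptor


def match(probe_feat, gallery_feats):
    # Euclidean (L2) distance between the probe and each gallery descriptor.
    dists = torch.norm(gallery_feats - probe_feat, dim=1)
    return torch.argmin(dists)                              # index of the closest gallery sample
```

In this sketch the fused descriptor is simply the concatenation of the pooled appearance vector and the gait vector; the distance in `match` corresponds to the Euclidean norm used for probe-gallery matching in the abstract.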
Funder
the National Natural Science Foundation of China
Key Laboratory of Integrated Automation of Process Industry
Subject
Process Chemistry and Technology, Chemical Engineering (miscellaneous), Bioengineering
Cited by
6 articles.