Abstract
Person re-identification (Re-ID) in intelligent security and smart city applications is complicated by pedestrian occlusion, which significantly reduces recognition accuracy through the loss of feature information and the introduction of occlusion noise. To address this challenge, we propose a person Re-ID network based on multi-level feature fusion. The network incorporates a feature extraction method that captures both high-level semantic information and low-level fine-grained detail from pedestrian images, improving robustness to interference and appearance variation. It further includes a feature fusion module that integrates global features with local fine-grained features, enhancing the model's generalization capability on Re-ID tasks. By incorporating a hard sample triplet loss, the network effectively addresses inter-class similarity and intra-class variation. Our model achieves an mAP of 89.5% and a Rank-1 accuracy of 95.8% on the Market-1501 dataset, outperforming existing methods.
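For reference, the sketch below illustrates one common form of the hard sample triplet loss mentioned above: a batch-hard formulation that, for each anchor, mines the hardest positive (farthest same-identity sample) and hardest negative (closest different-identity sample) within a mini-batch. This is a minimal sketch under that assumption; the function name, margin value, and Euclidean distance choice are illustrative, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def batch_hard_triplet_loss(embeddings: torch.Tensor,
                            labels: torch.Tensor,
                            margin: float = 0.3) -> torch.Tensor:
    """embeddings: (N, D) feature vectors; labels: (N,) identity IDs.
    Hypothetical batch-hard triplet loss, not the paper's exact code."""
    # Pairwise Euclidean distance matrix, shape (N, N).
    dist = torch.cdist(embeddings, embeddings, p=2)

    # Masks for same-identity (positive) and different-identity (negative) pairs.
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
    pos_mask = same & ~eye  # positives, excluding the anchor itself
    neg_mask = ~same        # negatives

    # Hardest positive: farthest sample sharing the anchor's identity.
    hardest_pos = (dist * pos_mask).max(dim=1).values
    # Hardest negative: closest sample with a different identity.
    inf = torch.full_like(dist, float('inf'))
    hardest_neg = torch.where(neg_mask, dist, inf).min(dim=1).values

    # Margin-based triplet objective, averaged over all anchors in the batch.
    return F.relu(hardest_pos - hardest_neg + margin).mean()
```

Mining the hardest positive and negative per anchor is what pushes the network to separate visually similar identities (inter-class similarity) while pulling together varied views of the same identity (intra-class variation).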