DOMOPT: A Detection-Based Online Multi-Object Pedestrian Tracking Network for Videos
Published: 2023-07
Volume: 37
Issue: 09
ISSN: 0218-0014
Container-title: International Journal of Pattern Recognition and Artificial Intelligence
Short-container-title: Int. J. Patt. Recogn. Artif. Intell.
Language: en
Author:
Huan Ruohong (1, ORCID),
Zheng Shuaishuai (1),
Xie Chaojie (1),
Chen Peng (1),
Liang Ronghua (1)
Affiliation:
1. College of Computer Science and Technology, Zhejiang University of Technology, Hangzhou, Zhejiang, China
Abstract
To address the low tracking accuracy and weak tracking stability of current multi-object pedestrian tracking algorithms in complex video scenes, a Detection-based Online Multi-Object Pedestrian Tracking (DOMOPT) network is proposed. First, a Multi-Level Feature Fusion (MLFF) pedestrian detection network is built on the Center and Scale Prediction (CSP) algorithm. A pyramid convolutional neural network serves as the backbone to strengthen feature extraction for small objects, and shallow and deep features from multiple levels are fused to fully capture positional and semantic information, further improving small-object detection. Then, on the basis of the Joint Detection and Embedding (JDE) architecture, a Multi-Branch Pedestrian Appearance (MBPA) feature extraction network is proposed and added to the detection network to extract an appearance feature vector for each pedestrian. Appearance feature extraction is treated as a classification task and trained jointly with pedestrian detection under a multi-task learning strategy. Experimental results show that the proposed network achieves better tracking accuracy and stability than state-of-the-art algorithms.
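The joint training described in the abstract can be illustrated with a short sketch. The following is a minimal PyTorch-style example, not the authors' implementation: the class name JointDetectionEmbeddingLoss, the linear identity classifier, and the uncertainty-based loss weighting are illustrative assumptions; the paper only states that appearance feature extraction is treated as a classification task trained jointly with detection under a multi-task learning strategy.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointDetectionEmbeddingLoss(nn.Module):
    """Hypothetical sketch: combine a detection loss with an
    identity-classification loss over appearance embeddings,
    balanced by learnable log-variance weights (a common
    multi-task weighting scheme; the paper's exact weighting
    is not specified in the abstract)."""

    def __init__(self, embed_dim: int, num_identities: int):
        super().__init__()
        # Maps each pedestrian embedding to an identity label, so
        # embedding learning becomes a classification task.
        self.id_classifier = nn.Linear(embed_dim, num_identities)
        # Learnable per-task uncertainty parameters (log variances).
        self.log_var_det = nn.Parameter(torch.zeros(()))
        self.log_var_id = nn.Parameter(torch.zeros(()))

    def forward(self, det_loss, embeddings, id_labels):
        # Identity-classification loss on the appearance embeddings.
        id_logits = self.id_classifier(embeddings)
        id_loss = F.cross_entropy(id_logits, id_labels)
        # Uncertainty-weighted sum of the two task losses.
        total = (torch.exp(-self.log_var_det) * det_loss + self.log_var_det
                 + torch.exp(-self.log_var_id) * id_loss + self.log_var_id)
        return 0.5 * total


# Toy usage: 8 detected pedestrians, 128-D embeddings, 100 identities.
criterion = JointDetectionEmbeddingLoss(embed_dim=128, num_identities=100)
det_loss = torch.tensor(1.3)             # placeholder detection loss
embeddings = torch.randn(8, 128)         # appearance vectors (MBPA-style)
id_labels = torch.randint(0, 100, (8,))  # ground-truth identity labels
loss = criterion(det_loss, embeddings, id_labels)
loss.backward()
```

With learnable log-variance weights, the balance between the detection and identity-classification losses is tuned by gradient descent rather than hand-picked, which is one standard way to realize the multi-task learning strategy the abstract mentions.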
Funder
National Natural Science Foundation of China
Publisher
World Scientific Pub Co Pte Ltd
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Software
Cited by
1 article.