Affiliation:
1. Department of Computer Science and Technology, Southwest University of Science and Technology, Mianyang, China
Abstract
In existing online multiple object tracking algorithms, schemes that combine object detection and re-identification (ReID) in a single model for joint learning have drawn great attention due to their balanced speed and accuracy. However, different tasks need to focus on different features, and learning two different tasks on the features extracted by the same model can lead to competition between the tasks, making it difficult to achieve optimal performance. To reduce this competition, a task-related attention network is proposed, which uses a self-attention mechanism to allow each branch to learn on feature maps related to its own task. In addition, a smooth gradient-boosting loss function is introduced, which improves the quality of the extracted ReID features by gradually shifting the training focus to the hard negative samples of each object. Extensive experiments on the MOT16, MOT17, and MOT20 datasets demonstrate the effectiveness of the proposed method, which is competitive with current mainstream algorithms.
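The abstract does not specify the exact architecture of the task-related attention network, but the idea of giving each branch its own attention over a shared backbone feature map can be illustrated with a minimal sketch. The module below (names such as TaskAttention and the reduction parameter are assumptions, not the authors' code) applies a standard spatial self-attention block per task branch, so the detection head and the ReID head each receive a re-weighted view of the same backbone features.

```python
import torch
import torch.nn as nn

class TaskAttention(nn.Module):
    """Minimal self-attention block that re-weights a shared backbone
    feature map for one task branch (detection or ReID). Hypothetical
    sketch; not the paper's actual implementation."""

    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C/r)
        k = self.key(x).flatten(2)                     # (B, C/r, HW)
        attn = torch.softmax(q @ k / (k.shape[1] ** 0.5), dim=-1)  # (B, HW, HW)
        v = self.value(x).flatten(2).transpose(1, 2)   # (B, HW, C)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + self.gamma * out                    # task-specific feature map


# Hypothetical usage: two separate attention modules feed the two task
# branches from the same backbone output, so each branch attends to the
# features relevant to its own task rather than sharing one feature map.
backbone_feat = torch.randn(2, 64, 32, 32)
det_feat = TaskAttention(64)(backbone_feat)   # fed to the detection head
reid_feat = TaskAttention(64)(backbone_feat)  # fed to the ReID head
```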
Publisher
Institution of Engineering and Technology (IET)
Subject
Computer Vision and Pattern Recognition,Software
Cited by
2 articles.