Affiliation:
1. Department of Avionics, Indian Institute of Space Science and Technology, Trivandrum 695547, Kerala, India
Abstract
Deep learning-based methods have recently been harnessed to address the multiple object tracking (MOT) problem. The tracking-by-detection approach to MOT involves two primary steps: object detection and data association. In the first step, objects of interest are detected in each frame of a video; in the second, correspondences between these detections are established across frames to recover object trajectories. This paper proposes an efficient and unified data association method that uses a deep feature association network (deepFAN) to learn the associations. Additionally, the Structural Similarity Index Metric (SSIM) is employed to resolve uncertainties in the data association, complementing the deep feature association network. These combined association scores link the current detections with the previous tracks, enhancing overall tracking performance. To evaluate the efficiency of the proposed MOT framework, we conducted a comprehensive analysis on popular MOT benchmarks, namely the MOT Challenge and UA-DETRAC. The results show that our technique performs substantially better than current state-of-the-art methods in terms of standard MOT metrics.
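The following is a minimal sketch of the association step described in the abstract: a learned appearance affinity is fused with an SSIM score to build a cost matrix, and detections are linked to tracks with the Hungarian algorithm. The `embed` function is only a placeholder for the deep feature association network, and the patch size, fusion weight `ALPHA`, and affinity threshold are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from skimage.metrics import structural_similarity
from skimage.transform import resize

PATCH = (64, 64)   # assumed common patch size for comparison (grayscale patches)
ALPHA = 0.7        # assumed weight between deep-feature and SSIM affinities

def embed(patch):
    # Placeholder for the deep feature association network (deepFAN):
    # here we simply flatten and L2-normalise the patch.
    v = patch.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-8)

def affinity(track_patch, det_patch):
    """Fuse deep-feature similarity with SSIM into a single association score."""
    a = resize(track_patch, PATCH, anti_aliasing=True)   # rescaled to [0, 1]
    b = resize(det_patch, PATCH, anti_aliasing=True)
    feat_sim = float(np.dot(embed(a), embed(b)))              # cosine similarity of embeddings
    ssim_sim = structural_similarity(a, b, data_range=1.0)    # structural similarity of patches
    return ALPHA * feat_sim + (1.0 - ALPHA) * ssim_sim

def associate(track_patches, det_patches, min_affinity=0.3):
    """Link current detections to previous tracks via the Hungarian algorithm."""
    cost = np.zeros((len(track_patches), len(det_patches)))
    for i, tp in enumerate(track_patches):
        for j, dp in enumerate(det_patches):
            cost[i, j] = -affinity(tp, dp)   # negate: the solver minimises cost
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if -cost[i, j] >= min_affinity]
```

In this sketch, unmatched detections would start new tracks and unmatched tracks would age out; those bookkeeping steps are omitted for brevity.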