Abstract
Discriminative correlation filters (DCF) combined with powerful feature descriptors have proven very effective in advanced visual object tracking. However, owing to their fixed capacity for discriminative learning, existing DCF trackers train the filter on a single template extracted by convolutional neural networks (CNN) or hand-crafted descriptors. Such single-template learning cannot yield discriminative filters whose validity is guaranteed under appearance variation. To capture the structural relevance of spatio-temporal appearance to the filtering system, we propose a new tracking algorithm that incorporates Grassmannian manifold learning into the DCF formulation. Our method constructs the appearance model within an online-updated affine subspace, enabling joint discriminative learning of the origin and basis of the subspace and thus enhancing the discrimination and interpretability of the learned filters. In addition, to improve tracking efficiency, we adaptively integrate online incremental learning to update the obtained manifold. In this way, specific spatio-temporal appearance patterns are learned dynamically during tracking, highlighting relevant variations and alleviating the performance degradation caused by less discriminative representations from a single template. Experimental results on several well-known datasets, i.e., OTB2013, OTB2015, UAV123, and VOT2018, demonstrate the merits of the proposed method and its superiority over state-of-the-art trackers.
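The online-updated affine subspace described above can be illustrated with a minimal sketch: maintain a running origin (mean) and an orthonormal basis, and fold each new appearance sample in with a rank-one update. This is a simplified, hypothetical illustration of incremental subspace maintenance, not the paper's Grassmannian formulation; the function name and the fixed subspace dimension `k` are assumptions for the example.

```python
import numpy as np

def update_affine_subspace(mu, U, n, x, k=5):
    """Incrementally update an affine subspace with a new sample.

    mu : (d,)   current subspace origin (running mean of n samples)
    U  : (d, m) orthonormal basis of the subspace
    n  : int    number of samples seen so far
    x  : (d,)   new appearance sample (e.g., a vectorized template)
    k  : int    maximum number of basis directions to retain
    """
    # Shift the subspace origin toward the new sample (running mean).
    mu_new = (n * mu + x) / (n + 1)
    # Component of the centered sample not explained by the current basis.
    c = x - mu_new
    r = c - U @ (U.T @ c)
    norm = np.linalg.norm(r)
    if norm > 1e-10:
        # Append the normalized residual as a new basis direction.
        U = np.hstack([U, (r / norm)[:, None]])
    # Re-orthonormalize and truncate to at most k directions.
    Q, _ = np.linalg.qr(U)
    return mu_new, Q[:, :k], n + 1
```

A full incremental scheme would also weight or forget old samples when truncating the basis; the sketch simply keeps the first `k` orthonormal directions to show the origin-and-basis bookkeeping.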
Funder
National Natural Science Foundation of China
Publisher
Springer Science and Business Media LLC
References (80 articles)
1. Henriques, J. F., Caseiro, R., Martins, P., & Batista, J. (2015). High-speed tracking with kernelized correlation filters. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3), 583–596.
2. Wu, Y., Lim, J., & Yang, M. H. (2013). Online object tracking: a benchmark. In IEEE conference on computer vision and pattern recognition (pp. 2411–2418). Los Alamitos: IEEE.
3. Wu, Y., Lim, J., & Yang, M.-H. (2015). Object tracking benchmark. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(9), 1834–1848.
4. Mueller, M., Smith, N., & Ghanem, B. (2016). A benchmark and simulator for UAV tracking. In European conference on computer vision (pp. 445–461). Berlin: Springer.
5. Kristan, M., Leonardis, A., Matas, J., Felsberg, M., Pflugfelder, R., Zajc, L. C., Vojir, T., Hager, G., Lukezic, A., Eldesokey, A., & Fernandez, G. (2017). The visual object tracking VOT2017 challenge results. In 2017 IEEE international conference on computer vision workshops (pp. 1949–1972). Los Alamitos: IEEE. https://doi.org/10.1109/ICCVW.2017.230.
Cited by
9 articles.