Abstract
This work presents Object Landmarks, a new type of visual feature designed for visual localization over major changes in distance and scale. An Object Landmark consists of a bounding box $\mathbf{b}$ defining an object, a descriptor $\mathbf{q}$ of that object produced by a Convolutional Neural Network, and a set of classical point features within $\mathbf{b}$. We evaluate Object Landmarks on visual odometry and place-recognition tasks, and compare them against several modern approaches. We find that Object Landmarks enable superior localization over major scale changes, reducing error by as much as 18% and increasing robustness to failure by as much as 80% versus the state-of-the-art. They allow localization under scale change factors up to 6, where state-of-the-art approaches break down at factors of 3 or more.
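The abstract defines an Object Landmark as a triple: a bounding box $\mathbf{b}$, a CNN descriptor $\mathbf{q}$, and the classical point features falling inside $\mathbf{b}$. A minimal sketch of that structure might look as follows; all field names, types, and the `contains` helper are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical sketch of the Object Landmark structure described in the
# abstract; names and layouts are assumptions, not the paper's actual API.

@dataclass
class PointFeature:
    # A classical keypoint (e.g. a SURF-style feature): image location
    # plus its descriptor vector.
    xy: Tuple[float, float]
    descriptor: List[float]

@dataclass
class ObjectLandmark:
    # Bounding box b, here assumed to be (x_min, y_min, x_max, y_max)
    # in pixel coordinates.
    b: Tuple[float, float, float, float]
    # CNN-produced descriptor q of the object enclosed by b.
    q: List[float]
    # Classical point features detected within b.
    points: List[PointFeature] = field(default_factory=list)

    def contains(self, x: float, y: float) -> bool:
        """Return True if the pixel (x, y) lies inside bounding box b."""
        x0, y0, x1, y1 = self.b
        return x0 <= x <= x1 and y0 <= y <= y1
```

Grouping a whole-object descriptor with the point features inside its box is what lets the feature survive large scale changes: the box and descriptor handle coarse matching, while the interior points support precise geometric estimation.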
Publisher
Springer Science and Business Media LLC
Cited by: 2 articles.