Image-based Navigation in Real-World Environments via Multiple Mid-level Representations: Fusion Models, Benchmark and Efficient Evaluation
Published: 2023-10-26
Issue: 8
Volume: 47
Pages: 1483-1502
ISSN: 0929-5593
Container-title: Autonomous Robots
Language: en
Short-container-title: Auton Robot
Authors: Rosano Marco, Furnari Antonino, Gulino Luigi, Santoro Corrado, Farinella Giovanni Maria
Abstract
Robot visual navigation is a relevant research topic. Current deep navigation models conveniently learn navigation policies in simulation, given the large amount of experience they need to collect. Unfortunately, the resulting models show limited generalization ability when deployed in the real world. In this work we explore solutions to facilitate the development of visual navigation policies trained in simulation that can be successfully transferred to the real world. We first propose an efficient evaluation tool to reproduce realistic navigation episodes in simulation. We then investigate a variety of deep fusion architectures to combine a set of mid-level representations, with the aim of finding the merge strategy that maximizes real-world performance. Our experiments, performed both in simulation and on a robotic platform, show the effectiveness of the considered mid-level representation-based models and confirm the reliability of the evaluation tool. The 3D models of the environment and the code of the validation tool are publicly available at the following link: https://iplab.dmi.unict.it/EmbodiedVN/.
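To make the fusion idea in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch of a policy that combines several mid-level representations: each representation (e.g. depth, surface normals, semantic segmentation) is encoded separately and the features are merged before a recurrent policy head. All module names, dimensions, and the choice of late fusion by concatenation are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a mid-level fusion navigation policy (illustrative;
# not the code released at the paper's project page). Each mid-level
# representation is encoded by its own small CNN, features are fused by
# concatenation, and a GRU maintains state across navigation steps.
import torch
import torch.nn as nn

class MidLevelFusionPolicy(nn.Module):
    def __init__(self, num_reprs=3, feat_dim=128, hidden_dim=256, num_actions=4):
        super().__init__()
        # One encoder per mid-level representation (assumed 3-channel maps).
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(3, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim),
            )
            for _ in range(num_reprs)
        ])
        # Late fusion by concatenation; the paper compares several strategies.
        self.fuse = nn.Linear(num_reprs * feat_dim, hidden_dim)
        self.rnn = nn.GRU(hidden_dim, hidden_dim, batch_first=True)
        self.actor = nn.Linear(hidden_dim, num_actions)

    def forward(self, reprs, hidden=None):
        # reprs: list of (B, 3, H, W) mid-level representation maps.
        feats = [enc(x) for enc, x in zip(self.encoders, reprs)]
        fused = torch.relu(self.fuse(torch.cat(feats, dim=-1)))
        out, hidden = self.rnn(fused.unsqueeze(1), hidden)
        return self.actor(out.squeeze(1)), hidden

# Example: three 128x128 mid-level maps for a batch of two observations.
policy = MidLevelFusionPolicy()
obs = [torch.randn(2, 3, 128, 128) for _ in range(3)]
logits, h = policy(obs)
```

Concatenation is only one plausible merge strategy; element-wise sums or attention-weighted combinations are natural alternatives of the kind such a comparison would cover.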
Publisher: Springer Science and Business Media LLC
Subject: Artificial Intelligence