A Vision Dynamics Learning Approach to Robotic Navigation in Unstructured Environments
Author:
Ginerica Cosmin1, Zaha Mihai1, Floroian Laura1, Cojocaru Dorian2, Grigorescu Sorin1
Affiliation:
1. Robotics, Vision and Control Laboratory (ROVIS), Transilvania University of Brasov, 500036 Brasov, Romania
2. Electronics and Mechatronics, Department of Automatic Control, University of Craiova, 200585 Craiova, Romania
Abstract
Autonomous legged navigation in unstructured environments is still an open problem that requires an intelligent agent to detect and react to potential obstacles in its surroundings. These obstacles range from vehicles, pedestrians, and immovable objects in structured environments, such as highway or city navigation, to unpredictable static and dynamic obstacles in unstructured environments, such as a forest road. The latter scenario is usually more difficult to handle, due to its higher unpredictability. In this paper, we propose a vision dynamics approach to the path planning and navigation problem for a quadruped robot navigating in an unstructured environment, specifically on a forest road. Our vision dynamics approach is based on a recurrent neural network that uses an RGB-D sensor as its data source, constructing sequences of previous depth observations and predicting future observations over a finite time span. We compare our approach with other state-of-the-art obstacle-driven path planning methods and perform ablation studies to analyze the impact of architectural changes to our model components, demonstrating that our approach achieves superior performance in generating collision-free trajectories for the intelligent agent.
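The abstract describes a recurrent model that encodes a sequence of past depth observations and rolls out predicted future observations over a finite horizon. The paper's architectural details are not given here, so the following is a minimal NumPy sketch of one way such a sequence-to-future depth predictor could look; the class name, layer sizes, Elman-style recurrence, and autoregressive rollout are all illustrative assumptions, not the authors' model.

```python
import numpy as np

class DepthSequencePredictor:
    """Minimal Elman-style RNN sketch: encodes a sequence of flattened
    depth frames into a hidden state, then autoregressively decodes
    predicted future frames. All sizes are illustrative assumptions."""

    def __init__(self, frame_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 1.0 / np.sqrt(hidden_dim)
        # Untrained weights; a real model would be fit on logged sensor data.
        self.W_in = rng.uniform(-s, s, (hidden_dim, frame_dim))
        self.W_h = rng.uniform(-s, s, (hidden_dim, hidden_dim))
        self.W_out = rng.uniform(-s, s, (frame_dim, hidden_dim))
        self.hidden_dim = hidden_dim

    def predict(self, frames, horizon=1):
        """frames: (T, frame_dim) array of past depth observations.
        Returns a (horizon, frame_dim) array of predicted future frames."""
        h = np.zeros(self.hidden_dim)
        for x in frames:                       # encode the observed sequence
            h = np.tanh(self.W_in @ x + self.W_h @ h)
        preds = []
        x = frames[-1]
        for _ in range(horizon):               # autoregressive rollout
            h = np.tanh(self.W_in @ x + self.W_h @ h)
            x = self.W_out @ h                 # predicted next depth frame
            preds.append(x)
        return np.stack(preds)

# Example: 8 past frames of a 24x32 depth image, predicted 3 steps ahead.
model = DepthSequencePredictor(frame_dim=24 * 32, hidden_dim=64)
past = np.random.default_rng(1).random((8, 24 * 32))
future = model.predict(past, horizon=3)
print(future.shape)  # (3, 768)
```

A planner built on top of such a model would score candidate trajectories against the predicted depth maps and discard those that intersect predicted obstacles.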
Funder
Romanian Executive Agency for Higher Education, Research, Development, and Innovation Funding
Cited by: 1 article.