Author:
Krishnan Kiran G, Mohan Abhishek, Vishnu S., Eapen Steve Abraham, Raj Amith, Jacob Jeevamma
Abstract
Modern robots are designed to complement or fully replace humans in complex planning and control tasks such as manipulating objects, assisting experts in various fields, navigating outdoor environments, and exploring uncharted territory. Even for those skilled in robot programming, designing a control scheme for such robots is typically challenging: each task requires a new, distinct controller built from scratch, and the designer must account for the wide range of circumstances the robot might encounter. This kind of manual programming is expensive and time consuming, so it would be more beneficial if a robot could learn a task on its own rather than being preprogrammed for every task. In this paper, a method for the path planning of a robot in a known environment is implemented using Q-Learning, which finds an optimal path between a specified starting point and ending point.
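The Q-Learning approach described in the abstract can be sketched with a small tabular example: an agent on a grid learns state-action values by trial and error and then follows the greedy policy from start to goal. The grid size, obstacle positions, rewards, and hyperparameters below are illustrative assumptions, not the paper's actual environment or settings.

```python
import random

# Minimal tabular Q-learning sketch for grid path planning (assumed setup).
ROWS, COLS = 4, 4
OBSTACLES = {(1, 1), (2, 1)}                   # blocked cells (assumed)
START, GOAL = (0, 0), (3, 3)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]   # up, down, left, right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2          # illustrative hyperparameters

def step(state, action):
    """Apply an action; bumping a wall or obstacle keeps the state."""
    r, c = state[0] + action[0], state[1] + action[1]
    if not (0 <= r < ROWS and 0 <= c < COLS) or (r, c) in OBSTACLES:
        return state, -1.0                     # penalty for an invalid move
    if (r, c) == GOAL:
        return (r, c), 10.0                    # reward for reaching the goal
    return (r, c), -0.1                        # step cost favours short paths

# Q-table over all (state, action) pairs, initialised to zero.
Q = {((r, c), a): 0.0
     for r in range(ROWS) for c in range(COLS) for a in range(len(ACTIONS))}

random.seed(0)
for _ in range(2000):                          # training episodes
    s = START
    while s != GOAL:
        # Epsilon-greedy action selection.
        a = (random.randrange(4) if random.random() < EPSILON
             else max(range(4), key=lambda x: Q[(s, x)]))
        s2, reward = step(s, ACTIONS[a])
        best_next = max(Q[(s2, b)] for b in range(4))
        # Standard Q-learning update rule.
        Q[(s, a)] += ALPHA * (reward + GAMMA * best_next - Q[(s, a)])
        s = s2

# Extract the greedy path from the learned Q-table.
path, s = [START], START
while s != GOAL and len(path) < ROWS * COLS:
    a = max(range(4), key=lambda x: Q[(s, x)])
    s, _ = step(s, ACTIONS[a])
    path.append(s)
print(path)
```

Running the sketch prints a sequence of grid cells from start to goal that avoids the obstacles; the small negative step cost steers the learned policy toward a shortest path.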
Publisher
Inventive Research Organization
Cited by: 2 articles.