Dynamic Path Planning Using a Modified Q-Learning Algorithm for a Mobile Robot

Authors:

Fallooh Noor H., Sadiq Ahmed T., Abbas Eyad I., Hashim Ivan A.

Abstract

Robot navigation involves a challenging task: path planning for a mobile robot operating in a changing environment. This work presents an enhanced Q-learning-based path planning technique for mobile robots in dynamic environments, together with several heuristic search techniques. The enhanced Q-learning employs a novel exploration approach that blends Boltzmann and ε-greedy exploration. Heuristic search techniques are introduced to constrain the range of orientation-angle variation and narrow the search space; the resulting reductions in orientation angle and path length yield significant energy savings. A dynamic reward is proposed to guide the mobile robot toward the target location, accelerating the convergence of Q-learning and shortening computation time. The experiments comprise two parts: quick and safe path planning. With quick path planning the mobile robot reaches the goal along the shortest path, and with safe path planning it avoids obstacles. The superior performance of the suggested strategy, quick and safe 8-connection Q-learning (Q8CQL), was validated by simulations comparing it to classical Q-learning and other planning methods in terms of computation time and path optimality.
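The blended exploration strategy and the dynamic reward described in the abstract can be sketched as follows. This is a minimal illustration assuming a tabular Q-learning setting; the blend rule, temperature schedule, and reward constants are placeholder assumptions, not the paper's exact formulas.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_action(q_values, epsilon=0.1, temperature=1.0):
    """Blend epsilon-greedy and Boltzmann exploration.

    With probability epsilon, sample an action from the Boltzmann
    (softmax) distribution over Q-values rather than uniformly at
    random; otherwise act greedily. Exploratory moves thus stay
    biased toward promising actions. (Illustrative blend rule only.)
    """
    if rng.random() < epsilon:
        logits = q_values / temperature
        logits -= logits.max()  # subtract max for numerical stability
        probs = np.exp(logits) / np.exp(logits).sum()
        return int(rng.choice(len(q_values), p=probs))
    return int(np.argmax(q_values))

def dynamic_reward(pos, new_pos, goal, reached_goal, hit_obstacle):
    """Distance-based reward shaping to speed up convergence.

    Rewards progress toward the goal and penalizes moving away,
    with terminal bonuses/penalties. All constants are placeholders.
    """
    if reached_goal:
        return 100.0
    if hit_obstacle:
        return -100.0
    d_old = np.linalg.norm(np.subtract(goal, pos))
    d_new = np.linalg.norm(np.subtract(goal, new_pos))
    return d_old - d_new  # positive when the robot moves closer

# Example: choose a move for a state with four candidate actions.
q = np.array([0.2, 0.5, 0.1, 0.7])
print(select_action(q, epsilon=0.2, temperature=0.5))
```

A shaping term of this kind preserves the greedy policy's preference for shorter paths while giving the agent a denser learning signal than a goal-only reward, which is consistent with the faster convergence the abstract reports.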

Publisher

EDP Sciences

