Abstract
This work presents a Deep Reinforcement Learning algorithm for controlling a differentially driven mobile robot. The study examines how different definitions of the environment containing the mobile robot influence the learning process. We focus on the Deep Deterministic Policy Gradient (DDPG) algorithm, which is applicable to continuous-action problems, and investigate the effectiveness of different exploration noise models, inputs, and cost functions in training the neural networks. To examine the properties of the presented algorithm, a number of simulations were run, and their results are reported. In the simulations, the mobile robot had to reach a target position while minimizing the distance error. Our goal was to optimize the learning process and, by analyzing the results, to recommend a more efficient choice of inputs and cost functions for future research.
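As a rough illustration of the setup described in the abstract, the sketch below shows one way a distance-error reward and DDPG-style additive exploration noise could be defined for a differentially driven (unicycle-model) robot. The kinematic model, time step, noise scale, and reward shaping here are illustrative assumptions, not the paper's exact definitions.

```python
import numpy as np

DT = 0.05  # integration step [s], assumed for illustration


def step(pose, v, omega, target):
    """Advance the robot pose (x, y, theta) by one step and return
    (new_pose, reward). The continuous action is linear velocity v and
    angular velocity omega, as in a DDPG continuous-action setup."""
    x, y, theta = pose
    x += v * np.cos(theta) * DT
    y += v * np.sin(theta) * DT
    theta = (theta + omega * DT + np.pi) % (2 * np.pi) - np.pi
    new_pose = np.array([x, y, theta])

    # Distance-error cost: reward is the negative Euclidean distance
    # to the target, so reducing the distance error increases the return.
    dist = np.linalg.norm(new_pose[:2] - target)
    return new_pose, -dist


def noisy_action(policy_action, sigma=0.1, rng=None):
    """DDPG explores by perturbing the deterministic policy output with
    additive noise (e.g. Gaussian or Ornstein-Uhlenbeck); a simple
    Gaussian perturbation is used here."""
    rng = rng or np.random.default_rng()
    return policy_action + rng.normal(0.0, sigma, size=policy_action.shape)


if __name__ == "__main__":
    pose = np.zeros(3)                  # start at the origin, facing +x
    target = np.array([1.0, 1.0])       # assumed target position
    action = noisy_action(np.array([0.5, 0.1]))  # [v, omega] from a policy
    pose, reward = step(pose, action[0], action[1], target)
    print(pose, reward)
```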
Funder
This research was funded by the Ministry of Education and Science.
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
Cited by
5 articles.