Authors:
Wang Tu, Wang Fujie, Xie Zhongye, Qin Feiyan
Abstract
In uncertain environments with robot input saturation, both model-based reinforcement learning (MBRL) and traditional controllers struggle to perform control tasks optimally. In this study, an algorithmic framework, Curiosity Model Policy Optimization (CMPO), is proposed that combines curiosity with a model-based approach, reducing tracking errors by training agents to tune the control gains of traditional model-free controllers. First, a metric for judging positive and negative curiosity is proposed, and constrained optimization is employed to update the curiosity ratio, which improves the efficiency of agent training. Next, the novelty distance buffer ratio is defined to reduce the bias between the environment and the learned model. Finally, CMPO is evaluated against traditional controllers and baseline MBRL algorithms in simulated robotic environments with non-linear rewards. The experimental results show that the algorithm achieves superior tracking performance and generalization capability.
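As a rough illustration of the two mechanisms named in the abstract, the sketch below shows one plausible reading of a positive/negative curiosity metric, a constrained curiosity-ratio update, and a novelty distance buffer ratio. All names (`curiosity_sign`, `update_curiosity_ratio`, `novelty_buffer_ratio`), the prediction-error metric, the step-size update, and the thresholds are illustrative assumptions, not the paper's actual definitions.

```python
# Hypothetical sketch of the curiosity-ratio and novelty-buffer ideas from
# the abstract. The metrics and update rules here are assumptions made for
# illustration; they are not CMPO's actual formulas.
import numpy as np

def curiosity_sign(pred_next_state, true_next_state, threshold=0.1):
    """Judge curiosity as positive (+1: novel but well-modeled, worth
    exploring) or negative (-1: model error so large the sample may
    mislead training), based on the model's one-step prediction error."""
    error = np.linalg.norm(pred_next_state - true_next_state)
    return (1 if error <= threshold else -1), error

def update_curiosity_ratio(ratio, sign, step=0.05, lo=0.0, hi=1.0):
    """Constrained update of the curiosity ratio: step toward more
    exploration on positive curiosity, less on negative, then project
    the result back onto the feasible interval [lo, hi]."""
    return float(np.clip(ratio + sign * step, lo, hi))

def novelty_buffer_ratio(buffer_states, new_state, dist=0.5):
    """Fraction of buffered states within a novelty distance of the new
    sample; a high ratio suggests similar data has been seen before and
    the environment-model bias for this region is likely small."""
    d = np.linalg.norm(buffer_states - new_state, axis=1)
    return float(np.mean(d <= dist))

# Toy usage: compare a random "model" prediction with a perturbed
# "environment" transition and adapt the curiosity ratio.
rng = np.random.default_rng(0)
ratio = 0.5
for _ in range(5):
    pred = rng.normal(size=3)
    true = pred + rng.normal(scale=0.05, size=3)
    sign, err = curiosity_sign(pred, true)
    ratio = update_curiosity_ratio(ratio, sign)
    print(f"error={err:.3f} sign={sign:+d} ratio={ratio:.2f}")

buf = rng.normal(size=(100, 3))
print("novelty buffer ratio:", novelty_buffer_ratio(buf, true))
```

The clipping step stands in for the constrained optimization the abstract mentions: it keeps the curiosity ratio inside a fixed feasible set regardless of how the per-sample curiosity signal swings.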