Affiliation:
1. Robotics and Advanced Manufacturing Department, Research Center for Advanced Studies (Cinvestav-IPN), Ramos Arizpe 25903, Mexico
2. Facultad de Ciencias de la Administración, Universidad Autónoma de Coahuila, Saltillo 25280, Mexico
Abstract
Reinforcement learning (RL) is explored for motor control of a novel pneumatic-driven soft robot modeled as a continuum medium with varying density. This model admits closed-form Lagrangian dynamics, which fulfill the fundamental structural property of passivity, among others. The question then arises of how to synthesize a passivity-based RL scheme that controls the unknown continuum soft-robot dynamics while exploiting its input–output energy properties through a reward-based neural network controller. Thus, we propose a continuous-time Actor–Critic scheme for tracking tasks of the continuum 3D soft robot subject to Lipschitz disturbances. A reward-based temporal difference drives learning through a novel discontinuous adaptive mechanism for the Critic neural weights. Finally, the reward and the integral of the Bellman error approximation reinforce the adaptive mechanism of the Actor neural weights. Closed-loop stability is guaranteed in the sense of Lyapunov, leading to local exponential convergence of tracking errors based on integral sliding modes. Notably, the dynamics are assumed unknown, yet the control is continuous and robust. A representative simulation study shows the effectiveness of our proposal for tracking tasks.
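The abstract describes a reward-driven Actor–Critic structure in which a temporal-difference (Bellman error) signal adapts the Critic weights and, together with the reward, reinforces the Actor weights. The paper's exact passivity-based, discontinuous update laws are not given here; the following is only a generic, minimal sketch of that Actor–Critic TD idea on a toy one-dimensional plant, with all features, gains, and dynamics chosen as illustrative placeholders:

```python
import numpy as np

# Hedged sketch of a generic Actor-Critic temporal-difference update.
# The plant, features, gains, and update laws below are illustrative
# placeholders, NOT the paper's passivity-based discontinuous design.

rng = np.random.default_rng(0)

def features(x):
    # simple polynomial features of the tracking error (assumption)
    return np.array([x, x**2, x**3])

W_c = np.zeros(3)                    # Critic weights (value approximation)
W_a = np.zeros(3)                    # Actor weights (control policy)
gamma, a_c, a_a = 0.95, 0.05, 0.01   # discount factor and learning rates

x = 1.0                              # tracking error of a toy 1-D plant
for _ in range(2000):
    phi = features(x)
    u = W_a @ phi + 0.1 * rng.standard_normal()   # exploratory control
    x_next = 0.9 * x + 0.1 * u                    # toy linear plant model
    r = -(x_next**2) - 0.01 * u**2                # reward penalizes error/effort
    # temporal-difference (Bellman) error of the Critic's value estimate
    delta = r + gamma * (W_c @ features(x_next)) - W_c @ phi
    W_c += a_c * delta * phi                      # Critic adapted by TD error
    W_a += a_a * delta * phi                      # Actor reinforced by TD error
    x = x_next
```

In the paper, by contrast, the scheme is continuous-time, the Critic adaptation is discontinuous, and stability is certified with a Lyapunov argument rather than relying on small learning rates.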
Subject
Artificial Intelligence, Control and Optimization, Mechanical Engineering
References (32 articles; first 5 shown)
1. Barto (2021). Looking Back on the Actor–Critic Architecture. IEEE Trans. Syst. Man Cybern. Syst.
2. Wang (2009). Adaptive Dynamic Programming: An Introduction. IEEE Comput. Intell. Mag.
3. Lewis, F., Vrabie, D., and Syrmos, V. (2012). Optimal Control. Wiley, EngineeringPro Collection.
4. Guo (2023). Composite adaptation and learning for robot control: A survey. Annu. Rev. Control.
5. Jin (2018). Robot manipulator control using neural networks: A survey. Neurocomputing.
Cited by: 2 articles.