Affiliation:
1. College of Science, Liaoning University of Technology, Jinzhou, China
Abstract
This paper proposes a simple and efficient adaptive event‐triggered optimized control (ETOC) scheme using reinforcement learning (RL) for stochastic nonlinear systems. The scheme includes an online state observer that estimates unmeasured states and a dynamically adjustable event‐triggered mechanism that reduces communication load. The RL algorithm is derived from the negative gradient of a simple positive function and employs an identifier‐actor‐critic architecture. The proposed ETOC approach operates in the sensor‐to‐controller channel and activates the control action directly from the triggered states, thereby saving network resources. Theoretical analysis proves that all closed‐loop signals remain bounded under the proposed output‐feedback ETOC method. Overall, this paper presents a practical and effective RL‐based ETOC scheme for stochastic nonlinear systems that can save communication resources while maintaining closed‐loop stability. Finally, a simulation example is provided to validate the presented control algorithm.
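The communication savings described above come from updating the controller only at triggering instants rather than at every sampling step. The following is a minimal sketch of that idea for a scalar stochastic system, assuming an illustrative relative-threshold triggering rule (the gains, thresholds, and dynamics here are placeholder assumptions, not the paper's actual design):

```python
import numpy as np

def simulate_event_triggered(T=2000, dt=0.005, delta=0.2, eps=0.01, seed=0):
    """Sketch of an event-triggered control loop with a relative threshold.

    The controller sees only the most recently *triggered* state sample;
    a new sample is transmitted when the measurement error exceeds a
    state-dependent threshold. All numerical values are illustrative.
    """
    rng = np.random.default_rng(seed)
    x = 1.0        # plant state
    x_trig = x     # state held by the controller since the last trigger
    triggers = 0
    for _ in range(T):
        u = -2.0 * x_trig  # control computed from the triggered state only
        noise = 0.05 * rng.standard_normal() * np.sqrt(dt)  # stochastic term
        x += (u + 0.5 * np.sin(x)) * dt + noise
        # event-triggering rule: transmit only when the error between the
        # current state and the held sample exceeds delta*|x| + eps
        if abs(x - x_trig) >= delta * abs(x) + eps:
            x_trig = x
            triggers += 1
    return triggers, T

triggers, total = simulate_event_triggered()
print(f"transmissions: {triggers} / {total} samples")
```

In a typical run only a small fraction of the sampling instants produce a transmission, which is the resource-saving effect the abstract refers to; the paper's scheme additionally adapts the threshold online and replaces the fixed gain with an RL-optimized controller.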
Subject
Applied Mathematics, Control and Optimization, Software, Control and Systems Engineering
Cited by
2 articles.