Abstract
With advances in artificial intelligence and distributed computing, there has been growing interest in the consensus of multi-agent systems (MASs). Sliding mode control (SMC) is a well-known method that provides robust control in the presence of uncertainties. While our previous study introduced SMC into reinforcement learning (RL) based on approximate dynamic programming in the context of optimal control, this work introduces SMC into a conventional RL framework. As a specific realization, a twin delayed deep deterministic policy gradient (DDPG) algorithm, modified for consensus, is exploited to develop sliding mode RL. Numerical experiments show that sliding mode RL outperforms existing state-of-the-art RL methods and model-based methods in terms of mean square error (MSE) performance.
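The abstract does not detail the algorithm, but the sketch below gives a rough sense of how a sliding variable can shape the reward in consensus RL. Everything in it is an assumption for illustration only: the four-agent ring graph, the double-integrator dynamics, the sliding-surface slope and gains, and the smoothed reaching law that stands in for a learned policy. It is not the paper's modified twin delayed DDPG implementation.

```python
import numpy as np

# Illustrative sketch only (assumed setup, not the paper's implementation):
# a sliding variable built from consensus errors of a double-integrator
# multi-agent system, and a reward a TD3/DDPG-style agent could maximize.

# Undirected ring of 4 agents: adjacency matrix and graph Laplacian.
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A

LAM = 1.0  # assumed sliding-surface slope


def sliding_variable(x, v):
    """s = L v + LAM * L x; s -> 0 as the agents reach consensus."""
    return L @ v + LAM * (L @ x)


def reward(x, v, u, ctrl_weight=0.01):
    """Reward shaped by the sliding variable instead of the raw state error."""
    s = sliding_variable(x, v)
    return -float(s @ s) - ctrl_weight * float(u @ u)


def step(x, v, u, dt=0.01):
    """One Euler step of double-integrator agent dynamics."""
    return x + dt * v, v + dt * u


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, v = rng.normal(size=4), np.zeros(4)
    for _ in range(2000):
        s = sliding_variable(x, v)
        # Stand-in for a learned policy: a smoothed SMC-style reaching law.
        u = -2.0 * s - 0.5 * np.tanh(s / 0.1)
        x, v = step(x, v, u)
    print("position spread:", float(x.max() - x.min()))
    print("final reward:   ", reward(x, v, u))
```

In an actual TD3/DDPG-style training loop, the reaching-law action above would be replaced by the actor network's output, and `reward` would drive the critic updates; the point of the sketch is only that driving the sliding variable to zero is one plausible way to encode the consensus objective.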
Funder
Ministry of Science and ICT
Ministry of Education of the Republic of Korea
National Research Foundation of Korea
Subject
Electrical and Electronic Engineering
Biochemistry
Instrumentation
Atomic and Molecular Physics, and Optics
Analytical Chemistry