Abstract
This paper studies gradient-based adversarial attacks on cluster-based, heterogeneous, multi-agent deep reinforcement learning (MADRL) systems with time-delayed data transmission. The MADRL system is organized into several clusters of agents. The agents in the first cluster follow a deep Q-network (DQN) architecture, while the remaining clusters are treated as the environment of the first cluster’s DQN agent. We introduce two novel observation types for data transmission, termed on-time and time-delay observations; these apply when the transmission channel is idle and the data arrives either on time or with a delay. Taking the distance between neighboring agents into account, we propose a novel immediate reward function that appends a distance-based term to the previously used reward, improving overall MADRL system performance. We consider three types of gradient-based attacks to investigate the robustness of the proposed system’s data transmission, and we propose two defense methods that mitigate the effects of these malicious attacks. System performance is rigorously evaluated in terms of the DQN loss and the team reward of the entire team of agents, and the effects of the various attacks are demonstrated both before and after the defense algorithms are applied. The theoretical results are illustrated and verified with simulation examples.
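The two ingredients summarized above, a distance-based reward-shaping term and a gradient-based perturbation of observations, can be sketched as follows. This is a minimal illustration only: the function names, the shaping weight `alpha`, the linear distance penalty, and the FGSM-style sign attack are assumptions for exposition, not details taken from the paper.

```python
import math

def augmented_reward(base_reward, pos_i, pos_j, alpha=0.1):
    # Hypothetical distance-based shaping: append a penalty proportional
    # to the separation between two neighboring agents. The weight alpha
    # and the linear form are illustrative assumptions.
    distance = math.dist(pos_i, pos_j)
    return base_reward - alpha * distance

def gradient_attack(observation, grad, epsilon=0.01):
    # FGSM-style gradient attack (one common gradient-based attack):
    # shift each observation component by epsilon in the direction of
    # the sign of the loss gradient.
    sign = lambda g: (g > 0) - (g < 0)
    return [x + epsilon * sign(g) for x, g in zip(observation, grad)]
```

For example, with agents at positions (0, 0) and (3, 4), the shaping term subtracts `alpha * 5` from the base reward, and the attack nudges each observation entry by `±epsilon`.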
Subject
Control and Optimization, Computer Networks and Communications, Instrumentation
Cited by 3 articles.