Affiliation:
1. University of Nevada, Reno, USA
2. General Motors, Detroit, USA
Abstract
In multi-agent autonomous transportation applications, such as automated payload delivery or highway on-ramp merging, agents routinely exchange knowledge through Cooperative Multi-Agent Reinforcement Learning (CMARL) algorithms to optimize a shared objective and adapt to environmental novelties. This knowledge exchange allows such systems to operate efficiently and adapt to dynamic environments. However, as contemporary research has highlighted, the cooperative learning process is susceptible to adversarial poisoning attacks. In particular, attacks in which malicious agents inject deceptive information camouflaged within the differential noise, a pivotal element of differential privacy (DP)-based CMARL algorithms, are especially difficult to detect and mitigate. Left unaddressed, such attacks can jeopardize safety-critical operations and compromise data privacy in these applications. Existing research has sought to counter conventional poisoning methods with anomaly-detection-based defense models. Nonetheless, their recurring need for model offloading and retraining with labeled anomalous data undermines their practicality, given the inherently dynamic nature of safety-critical autonomous transportation applications. A practical defense must also preserve data privacy, maintain high performance, and adapt to environmental change. Motivated by these challenges, this article introduces a novel defense mechanism against stealthy adversarial poisoning attacks in the autonomous transportation domain, termed
Reinforcing Autonomous Multi-agent Protection through Adversarial Resistance in Transportation
(RAMPART). Leveraging a GAN model at each local node, RAMPART filters out malicious advice in an unsupervised manner while generating synthetic samples for each state-action pair, accommodating environmental uncertainties and eliminating the need for labeled training data. Extensive experiments in a private payload delivery network, a common application in the autonomous multi-agent transportation domain, demonstrate that RAMPART successfully defends against a DP-exploited poisoning attack with a 30% attack ratio, achieving an F1 score of 0.852 and an accuracy of 96.3% in heavy-traffic environments.
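The core filtering idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the discriminator here is a hypothetical toy stand-in, and `filter_advice`, `toy_discriminator`, and the 0.5 threshold are all assumptions made purely for illustration of scoring incoming advice and discarding low-scoring (likely poisoned) samples without labeled data.

```python
import numpy as np

def filter_advice(advice_batch, discriminator, threshold=0.5):
    """Keep only advice vectors the discriminator scores as plausibly genuine.

    In a GAN-based defense, `discriminator` would be the locally trained
    discriminator network; here it is any callable returning a score in [0, 1].
    """
    scores = np.array([discriminator(a) for a in advice_batch])
    return [a for a, s in zip(advice_batch, scores) if s >= threshold]

# Toy stand-in discriminator (hypothetical): treats advice vectors near the
# origin as genuine, so the score decays with distance from the origin.
def toy_discriminator(x):
    return float(np.exp(-np.linalg.norm(x)))

genuine = np.zeros(4)        # near origin -> score 1.0, kept
poisoned = np.full(4, 10.0)  # far from origin -> score ~0, filtered out
kept = filter_advice([genuine, poisoned], toy_discriminator)
print(len(kept))  # 1
```

In the actual unsupervised setting, the discriminator would be trained jointly with a generator producing synthetic state-action samples, so no labeled anomalous data is required.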
Funder
U.S. National Science Foundation
NSF-PFI-TT
Publisher
Association for Computing Machinery (ACM)
Cited by
1 article.
1. A Reinforcement Learning-based Adaptive Digital Twin Model for Forests. 2024 4th International Conference on Applied Artificial Intelligence (ICAPAI), 2024-04-16.