Using Reinforcement Learning to Escape Automatic Filter-based Adversarial Example Defense
Published: 2024-08-15
ISSN: 1550-4859
Container-title: ACM Transactions on Sensor Networks
Language: en
Short-container-title: ACM Trans. Sen. Netw.
Author:
Li Yantao (1), Dan Kaijian (2), Lei Xinyu (3), Qin Huafeng (4), Deng Shaojiang (2), Zhou Gang (5)
Affiliation:
1. College of Computer Science, Chongqing University, Chongqing, China
2. College of Computer Science, Chongqing University, Chongqing, China
3. Department of Computer Science, Michigan Technological University, Houghton, United States
4. School of Computer Science and Information Engineering, Chongqing Technology and Business University, Chongqing, China
5. Computer Science Department, William & Mary, Williamsburg, United States
Abstract
Deep neural networks can be easily fooled by adversarial examples, which are specially crafted inputs with subtle, intentional perturbations. A plethora of papers have proposed filters as an effective defense against adversarial example attacks. However, we demonstrate that automatic filter-based defenses may not be reliable. In this paper, we present URL2AED, which Uses a Reinforcement Learning scheme TO escape automatic filter-based Adversarial Example Defenses. Specifically, URL2AED uses a specially crafted policy-gradient reinforcement learning (RL) algorithm to generate adversarial examples (AEs) that escape automatic filter-based AE defenses. In particular, we design reward functions in policy-gradient RL for targeted attacks and non-targeted attacks, respectively. Furthermore, we customize the training algorithm to reduce the possible action space in policy-gradient RL, which accelerates URL2AED training while still ensuring that URL2AED generates successful AEs. To demonstrate the performance of the proposed URL2AED, we conduct extensive experiments on three public datasets in terms of different degrees of the perturbation parameter, different filter parameters, transferability, and time consumption. The experimental results show that URL2AED achieves high attack success rates against automatic filter-based defenses and good cross-model transferability.
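The abstract does not spell out URL2AED's concrete reward functions or action-space reduction, but the overall idea can be illustrated with a minimal REINFORCE-style sketch. Everything below is an illustrative assumption rather than the authors' method: a 3x3 median filter stands in for the automatic filter-based defense, `victim` is any image classifier returning logits, and restricting each pixel's action to {+eps, -eps} is one plausible way to shrink the action space.

```python
# Minimal REINFORCE-style sketch of the attack loop described in the abstract.
# NOTE: the concrete reward functions, action-space reduction, and filter
# parameters of URL2AED are not given in the abstract; `victim`, `eps`, and
# the median-filter defense below are illustrative assumptions.
import torch
import torch.nn.functional as F

def median_filter_defense(x, k=3):
    """Stand-in for an automatic filter-based defense: a k x k median filter."""
    b, c, h, w = x.shape
    patches = F.unfold(x, kernel_size=k, padding=k // 2)   # (B, C*k*k, H*W)
    patches = patches.view(b, c, k * k, h * w)
    return patches.median(dim=2).values.view(b, c, h, w)

def reward(logits, y_true, y_target=None):
    """Reward computed on the *filtered* example: a non-targeted attack is
    rewarded for suppressing the true class, a targeted attack for raising
    the probability of the target class."""
    probs = logits.softmax(dim=-1)
    if y_target is None:                                    # non-targeted
        return -probs[0, y_true]
    return probs[0, y_target]                               # targeted

def attack(victim, x, y_true, y_target=None, eps=8 / 255, steps=200, lr=0.05):
    """REINFORCE over a per-pixel Bernoulli policy whose only actions are
    +eps or -eps, i.e. a deliberately reduced action space."""
    theta = torch.zeros_like(x, requires_grad=True)         # policy logits
    opt = torch.optim.Adam([theta], lr=lr)
    for _ in range(steps):
        dist = torch.distributions.Bernoulli(logits=theta)
        a = dist.sample()                                   # 0/1 per pixel
        x_adv = (x + eps * (2 * a - 1)).clamp(0, 1)
        with torch.no_grad():                               # black-box query
            r = reward(victim(median_filter_defense(x_adv)), y_true, y_target)
        loss = -(r.detach() * dist.log_prob(a).sum())       # policy gradient
        opt.zero_grad()
        loss.backward()
        opt.step()
    a = (theta > 0).float()                                 # greedy final action
    return (x + eps * (2 * a - 1)).clamp(0, 1)

# Hypothetical usage:
#   x_adv = attack(victim, x, y_true=3)              # non-targeted
#   x_adv = attack(victim, x, y_true=3, y_target=7)  # targeted
```

Because the reward is evaluated on the filtered image, the policy is pushed toward perturbations that survive the filter; collapsing each pixel's action to a binary choice keeps the policy a single logit map instead of a continuous per-pixel distribution, which is one way such a reduced action space can speed up training.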
Publisher
Association for Computing Machinery (ACM)