Using Reinforcement Learning to Escape Automatic Filter-based Adversarial Example Defense

Authors:

Yantao Li¹, Kaijian Dan², Xinyu Lei³, Huafeng Qin⁴, Shaojiang Deng², Gang Zhou⁵

Affiliations:

1. College of Computer Science, Chongqing University, Chongqing, China

2. College of Computer Science, Chongqing University, Chongqing, China

3. Department of Computer Science, Michigan Technological University, Houghton, United States

4. School of Computer Science and Information Engineering, Chongqing Technology and Business University, Chongqing, China

5. Computer Science Department, William & Mary, Williamsburg, United States

Abstract

Deep neural networks can be easily fooled by adversarial examples, which are specially crafted inputs with subtle and intentional perturbations. A plethora of papers have proposed using filters to defend against adversarial example attacks. However, we demonstrate that automatic filter-based defenses may not be reliable. In this paper, we present URL2AED, which Uses a Reinforcement Learning scheme TO escape automatic filter-based Adversarial Example Defenses. Specifically, URL2AED uses a specially crafted policy-gradient reinforcement learning (RL) algorithm to generate adversarial examples (AEs) that can escape automatic filter-based AE defenses. In particular, we design reward functions in policy-gradient RL for targeted attacks and non-targeted attacks, respectively. Furthermore, we customize the training algorithm to reduce the possible action space in policy-gradient RL, which accelerates URL2AED training while still ensuring that URL2AED generates successful AEs. To demonstrate the performance of the proposed URL2AED, we conduct extensive experiments on three public datasets in terms of different degrees of perturbation, different filter parameters, transferability, and time consumption. The experimental results show that URL2AED achieves high attack success rates against automatic filter-based defenses and exhibits good cross-model transferability.
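The abstract does not include code, so the following is only a minimal, hypothetical Python sketch of how a non-targeted reward in policy-gradient RL might score a candidate perturbation against a filter-based defense. Here a median filter stands in for the "automatic filter" defense, and predict_probs, lam, and filter_size are illustrative assumptions rather than the authors' actual design.

```python
# Illustrative sketch (not the paper's implementation): scoring a candidate
# perturbation for a non-targeted attack against a classifier that applies
# an automatic median filter before prediction.
import numpy as np
from scipy.ndimage import median_filter

def predict_probs(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a real image classifier: returns class probabilities."""
    rng = np.random.default_rng(int(image.sum() * 1e6) % (2**32))
    logits = rng.normal(size=10)
    return np.exp(logits) / np.exp(logits).sum()

def non_targeted_reward(x: np.ndarray, perturbation: np.ndarray,
                        true_label: int, filter_size: int = 3,
                        lam: float = 0.1) -> float:
    """Reward is high when the *filtered* adversarial example is misclassified,
    minus an assumed penalty on perturbation magnitude."""
    x_adv = np.clip(x + perturbation, 0.0, 1.0)
    x_filtered = median_filter(x_adv, size=filter_size)  # the defense's filter
    probs = predict_probs(x_filtered)
    # Probability mass moved away from the true class, traded off against L1 cost.
    return float(1.0 - probs[true_label]) - lam * float(np.abs(perturbation).mean())

# Toy usage: evaluate one sampled perturbation on a random 32x32 image.
x = np.random.rand(32, 32).astype(np.float32)
delta = 0.03 * (np.random.rand(32, 32).astype(np.float32) - 0.5)
print(non_targeted_reward(x, delta, true_label=3))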

Publisher

Association for Computing Machinery (ACM)

