Affiliation:
1. School of Computer Science and Technology, Beijing Institute of Technology
2. School of Cyberspace Science and Technology, Beijing Institute of Technology
Abstract
Adversarial examples can fool deep learning models, and their transferability is critical for attacking black-box models in real-world scenarios. Existing state-of-the-art transferable adversarial attacks tend to exploit intrinsic features of objects to generate adversarial examples. This paper proposes the Random Patch Attack (RPA), which significantly improves the transferability of adversarial examples via a patch-wise random transformation that highlights important intrinsic features of objects. Specifically, we apply random patch transformations to original images to perturb model-specific features. Because object-related features stay consistent across multiple transformations while model-specific elements are neutralized, aggregating the transformed images preserves the important object-related features. These essential features then steer the noise to perturb object-related regions, generating adversarial examples with superior transferability across different models. Extensive experimental results demonstrate the effectiveness of the proposed RPA. Compared to state-of-the-art transferable attacks, our attacks improve the black-box attack success rate on average by 2.9% against normally trained models, 4.7% against defense models, and 4.6% against vision transformers, reaching maxima of 99.1%, 93.2%, and 87.8%, respectively.
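The core idea described above can be sketched in a few lines: apply independent random transformations to each patch of an image, then average a feature-sensitivity signal over many such transformed copies, so that object-related components accumulate while model-specific ones cancel. The function names, patch size, scaling range, and the `gradient_fn` placeholder below are all illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def random_patch_transform(image, patch_size=32, scale_range=(0.5, 1.5), rng=None):
    # Hypothetical patch-wise random transformation: split the image into
    # patches and rescale each patch's values by an independent random
    # factor, disturbing model-specific features while object-related
    # structure survives on average.
    rng = rng or np.random.default_rng()
    out = image.astype(np.float64).copy()
    h, w = image.shape[:2]
    for y in range(0, h, patch_size):
        for x in range(0, w, patch_size):
            out[y:y + patch_size, x:x + patch_size] *= rng.uniform(*scale_range)
    return out

def aggregate_feature_importance(image, gradient_fn, n_transforms=30, rng=None):
    # Average a sensitivity map over many patch-transformed copies.
    # Components consistent across transformations (object-related)
    # accumulate; model-specific components average out. `gradient_fn`
    # stands in for the gradient of an intermediate feature map w.r.t.
    # the input, which a real attack would obtain from a surrogate model.
    rng = rng or np.random.default_rng(0)
    agg = np.zeros(image.shape, dtype=np.float64)
    for _ in range(n_transforms):
        agg += gradient_fn(random_patch_transform(image, rng=rng))
    return agg / n_transforms
```

In a full attack, the aggregated map would weight the gradients used to craft the perturbation, concentrating the noise budget on the object-related regions it highlights.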
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
10 articles.
1. Transferable Multimodal Attack on Vision-Language Pre-training Models;2024 IEEE Symposium on Security and Privacy (SP);2024-05-19
2. Enhancing Adversarial Transferability in Object Detection with Bidirectional Feature Distortion;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14
3. Enhancing Targeted Transferability VIA Feature Space Fine-Tuning;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14
4. Call White Black: Enhanced Image-Scaling Attack in Industrial Artificial Intelligence Systems;IEEE Transactions on Industrial Informatics;2024-04
5. NeuralSanitizer: Detecting Backdoors in Neural Networks;IEEE Transactions on Information Forensics and Security;2024