Affiliation:
1. Center for Future Media, University of Electronic Science and Technology of China
Abstract
By adding human-imperceptible perturbations to images, DNNs can be easily fooled. As one of the mainstream methods, feature space targeted attacks perturb images by modulating their intermediate feature maps so that the discrepancy between the source and target features is minimized. However, the current choice of pixel-wise Euclidean distance to measure this discrepancy is questionable, because it unreasonably imposes a spatial-consistency constraint on the source and target features.
Intuitively, an image can be categorized as "cat" no matter whether the cat is on the left or the right of the image. To address this issue, we propose to measure the discrepancy using statistic alignment. Specifically, we design two novel approaches, called Pair-wise Alignment Attack and Global-wise Alignment Attack, which measure similarities between feature maps by high-order statistics with translation invariance. Furthermore, we systematically analyze the layer-wise transferability with varied difficulties to obtain highly reliable attacks. Extensive experiments verify the effectiveness of our proposed method, which outperforms the state-of-the-art algorithms by a large margin. Our code is publicly available at https://github.com/yaya-cheng/PAA-GAA.
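The translation-invariance argument above can be illustrated with a minimal sketch. This is not the paper's implementation: the loss below uses a channel-wise Gram matrix as one example of a second-order statistic that ignores spatial position, and all names (`euclidean_discrepancy`, `pairwise_alignment`) are hypothetical. Because the Gram matrix sums over spatial locations, shifting the "cat" across the feature map leaves it unchanged, while the pixel-wise Euclidean distance grows large.

```python
import numpy as np

def euclidean_discrepancy(f_src, f_tgt):
    # Pixel-wise Euclidean distance: compares features at identical
    # spatial positions, so it penalizes any translation.
    return float(np.sum((f_src - f_tgt) ** 2))

def pairwise_alignment(f_src, f_tgt):
    # Hypothetical second-order (Gram-style) alignment: channel-by-channel
    # co-activation statistics, invariant to where activations occur spatially.
    c, h, w = f_src.shape
    fs = f_src.reshape(c, -1)
    ft = f_tgt.reshape(c, -1)
    gs = fs @ fs.T / (h * w)  # source Gram matrix over spatial locations
    gt = ft @ ft.T / (h * w)  # target Gram matrix over spatial locations
    return float(np.sum((gs - gt) ** 2))

# A feature map and a horizontally shifted copy ("cat on the left vs. right").
rng = np.random.default_rng(0)
f = rng.standard_normal((4, 8, 8))
f_shifted = np.roll(f, shift=4, axis=2)

print(euclidean_discrepancy(f, f_shifted))  # large: positions no longer match
print(pairwise_alignment(f, f_shifted))     # ~0: statistics are unchanged
```

The shift permutes spatial locations but does not alter which channels co-activate, which is why the statistic-based measure is indifferent to object position while the Euclidean one is not.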
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
10 articles.
1. Trustworthy machine learning in the context of security and privacy;International Journal of Information Security;2024-04-03
2. A Unified Optimization Framework for Feature-Based Transferable Attacks;IEEE Transactions on Information Forensics and Security;2024
3. Towards Boosting Black-Box Attack Via Sharpness-Aware;2023 IEEE International Conference on Multimedia and Expo (ICME);2023-07
4. Towards Transferable Targeted Adversarial Examples;2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR);2023-06
5. Dynamic Generative Targeted Attacks with Pattern Injection;2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR);2023-06