Authors:
Motasem Alfarra, Juan C. Perez, Ali Thabet, Adel Bibi, Philip H.S. Torr, Bernard Ghanem
Abstract
Deep neural networks are vulnerable to small input perturbations known as adversarial attacks. Inspired by the fact that these adversaries are constructed by iteratively minimizing the confidence of a network for the true class label, we propose the anti-adversary layer, aimed at countering this effect. In particular, our layer generates an input perturbation in the opposite direction of the adversarial one and feeds the classifier a perturbed version of the input. Our approach is training-free and theoretically supported. We verify its effectiveness by combining our layer with both nominally and robustly trained models, and conduct large-scale experiments, ranging from black-box to adaptive attacks, on CIFAR10, CIFAR100, and ImageNet. Our layer significantly enhances model robustness at no cost to clean accuracy.
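The mechanism described above can be sketched in a few lines of PyTorch. This is a minimal illustrative sketch, not the authors' released implementation: the function name, step size, and number of steps are assumptions. It takes the classifier's own prediction on the input as a pseudo-label and applies a few signed-gradient steps that increase the model's confidence in that prediction, i.e. it moves in the opposite direction an attacker would, before classifying the counter-perturbed input.

    import torch
    import torch.nn.functional as F

    def anti_adversary_predict(model, x, step_size=0.15, num_steps=2):
        # Use the classifier's own prediction on the input as a pseudo-label.
        with torch.no_grad():
            pseudo_label = model(x).argmax(dim=1)

        # Build the anti-adversarial perturbation: a few signed-gradient steps
        # that DECREASE the cross-entropy w.r.t. the pseudo-label, i.e. increase
        # the model's confidence in its own prediction (opposite of an attacker,
        # who would increase this loss).
        delta = torch.zeros_like(x)
        for _ in range(num_steps):
            delta.requires_grad_(True)
            loss = F.cross_entropy(model(x + delta), pseudo_label)
            grad, = torch.autograd.grad(loss, delta)
            delta = (delta - step_size * grad.sign()).detach()

        # Classify the counter-perturbed input.
        with torch.no_grad():
            return model(x + delta)

Consistent with the abstract, such a layer is training-free: it only requires forward and backward passes through an already-trained classifier at test time.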
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
6 articles.
1. A Comprehensive Survey on Test-Time Adaptation Under Distribution Shifts;International Journal of Computer Vision;2024-07-18
2. Unifying Gradients to Improve Real-World Robustness for Deep Networks;ACM Transactions on Intelligent Systems and Technology;2023-11-14
3. Test-Time Adversarial Detection and Robustness for Localizing Humans Using Ultra Wide Band Channel Impulse Responses;2023 31st European Signal Processing Conference (EUSIPCO);2023-09-04
4. Visual Prompting for Adversarial Robustness;ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2023-06-04
5. Angelic Patches for Improving Third-Party Object Detector Performance;2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR);2023-06