Robustness and Transferability of Adversarial Attacks on Different Image Classification Neural Networks
Published: 2024-01-31
Issue: 3
Volume: 13
Page: 592
ISSN: 2079-9292
Container-title: Electronics
Short-container-title: Electronics
Language: en
Author:
Smagulova, Kamilya 1; Bacha, Lina 2; Fouda, Mohammed E. 3; Kanj, Rouwaida 2; Eltawil, Ahmed 1
Affiliation:
1. Division of Computer, Electrical and Mathematical Sciences and Engineering (CEMSE), King Abdullah University of Science and Technology, Thuwal 23955, Saudi Arabia
2. Department of Electrical and Computer Engineering, American University of Beirut, Beirut 1107 2020, Lebanon
3. Rain Neuromorphics, Inc., San Francisco, CA 94110, USA
Abstract
Recent works have demonstrated that imperceptible perturbations to input data, known as adversarial examples, can mislead the output of neural networks. Moreover, the same adversarial sample can transfer across models and fool different neural networks. Such vulnerabilities impede the use of neural networks in mission-critical tasks. To the best of our knowledge, this is the first paper that evaluates the robustness of emerging CNN- and transformer-inspired image classifier models, such as SpinalNet and the Compact Convolutional Transformer (CCT), against popular white- and black-box adversarial attacks implemented in the Adversarial Robustness Toolbox (ART). In addition, the transferability of the generated adversarial samples across the given models was studied. The tests were carried out on the CIFAR-10 dataset, and the results show that SpinalNet is about as susceptible to these attacks as the traditional VGG model, whereas CCT demonstrates better generalization and robustness. The results of this work can serve as a reference for further studies, such as the development of new attacks and defense mechanisms.
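The evaluation protocol described above can be illustrated with ART directly. The following is a minimal sketch, not the authors' exact setup: it wraps two PyTorch classifiers, crafts FGSM adversarial examples (one of the white-box attacks available in ART) against a source model, and checks how the same samples transfer to a second model. The tiny untrained CNN, the random stand-in data, and the eps value are placeholders for illustration only; in the paper the models are VGG, SpinalNet, and CCT, and the data is the CIFAR-10 test set.

```python
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

def tiny_cnn() -> nn.Module:
    # Untrained stand-in classifier; the paper uses trained VGG, SpinalNet, and CCT models.
    return nn.Sequential(
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2), nn.Flatten(),
        nn.Linear(16 * 16 * 16, 10),
    )

def wrap(model: nn.Module) -> PyTorchClassifier:
    # Wrap a PyTorch model so ART attacks can query its gradients and predictions.
    return PyTorchClassifier(model=model, loss=nn.CrossEntropyLoss(),
                             input_shape=(3, 32, 32), nb_classes=10,
                             clip_values=(0.0, 1.0))

def accuracy(clf: PyTorchClassifier, x: np.ndarray, y: np.ndarray) -> float:
    return float((clf.predict(x).argmax(axis=1) == y).mean())

# Random stand-in data; in the paper this is the CIFAR-10 test split.
x_test = np.random.rand(32, 3, 32, 32).astype(np.float32)
y_test = np.random.randint(0, 10, size=32)

source_clf, target_clf = wrap(tiny_cnn()), wrap(tiny_cnn())

# White-box FGSM attack on the source model (eps = 8/255 is illustrative only).
attack = FastGradientMethod(estimator=source_clf, eps=8 / 255)
x_adv = attack.generate(x=x_test)

print("source clean acc:", accuracy(source_clf, x_test, y_test))
print("source adv   acc:", accuracy(source_clf, x_adv, y_test))
# Transferability check: feed the same adversarial samples to a different model.
print("target adv   acc:", accuracy(target_clf, x_adv, y_test))
```

Swapping FastGradientMethod for other ART evasion attacks (e.g., PGD or black-box attacks) follows the same wrap/generate/evaluate pattern.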
Funder
King Abdullah University of Science and Technology CRG program
Cited by
1 article.