Detection of adversarial attacks based on differences in image entropy
Published: 2023-08-17
Issue: 1
Volume: 23
Pages: 299-314
ISSN: 1615-5262
Container-title: International Journal of Information Security
Language: en
Short-container-title: Int. J. Inf. Secur.
Author: Ryu Gwonsang, Choi Daeseon
Abstract
Although deep neural networks (DNNs) have achieved high performance across various applications, they are often deceived by adversarial examples generated by adding small perturbations. To combat adversarial attacks, many detection methods have been proposed, including feature squeezing and trapdoor. However, these methods rely on the output of DNNs or involve training a separate network to detect adversarial examples, which leads to high computational costs and low efficiency. In this study, we propose a simple and effective approach called the entropy-based detector (EBD) to protect DNNs from various adversarial attacks. EBD detects adversarial examples by comparing the difference in entropy between the input sample before and after bit depth reduction. We show that EBD can detect over 98% of the adversarial examples generated by the fast gradient sign method, the basic iterative method, the momentum iterative method, DeepFool, and CW attacks at a false positive rate of 2.5% on the CIFAR-10 and ImageNet datasets.
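The detection idea summarized in the abstract can be illustrated with a minimal sketch: compute the Shannon entropy of an image before and after bit depth reduction and flag the input if the change exceeds a threshold. The function names, the 4-bit depth, and the threshold value below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def shannon_entropy(image: np.ndarray) -> float:
    """Shannon entropy (bits) of the pixel-intensity histogram of a uint8 image."""
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def reduce_bit_depth(image: np.ndarray, bits: int = 4) -> np.ndarray:
    """Quantize an 8-bit image to `bits` bits per channel (illustrative choice)."""
    shift = 8 - bits
    return ((image >> shift) << shift).astype(np.uint8)

def is_adversarial(image: np.ndarray, bits: int = 4, threshold: float = 0.5) -> bool:
    """Flag the input if the entropy change after bit-depth reduction exceeds a
    threshold; the value 0.5 is a placeholder, not taken from the paper."""
    delta = abs(shannon_entropy(image) - shannon_entropy(reduce_bit_depth(image, bits)))
    return delta > threshold

if __name__ == "__main__":
    # Random stand-in for a real test sample.
    rng = np.random.default_rng(0)
    x = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)
    print(is_adversarial(x))
```

In practice the threshold would be calibrated on clean data so that the detector operates at a chosen false positive rate, such as the 2.5% reported in the abstract.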
Funder
National Research Foundation of Korea (NRF) grant funded by the Korea government; Institute of Information & Communications Technology Planning & Evaluation (IITP) grant funded by the Korea government
Publisher
Springer Science and Business Media LLC
Subject
Computer Networks and Communications; Safety, Risk, Reliability and Quality; Information Systems; Software
Cited by
3 articles.
1. Reconstructing images with attention generative adversarial network against adversarial attacks; Journal of Electronic Imaging; 2024-06-17
2. Pixel Map Analysis Adversarial Attack Detection on Transfer Learning Model; International Journal of Scientific Research in Computer Science, Engineering and Information Technology; 2024-03-30
3. A Comprehensive Review on Adversarial Attack Detection Analysis in Deep Learning; International Journal of Scientific Research in Computer Science, Engineering and Information Technology; 2023-11-10