Adversarial-Aware Deep Learning System Based on a Secondary Classical Machine Learning Verification Approach
Authors:
Mohammed Alkhowaiter 1,2, Hisham Kholidy 3, Mnassar A. Alyami 1, Abdulmajeed Alghamdi 1, Cliff Zou 1
Affiliations:
1. College of Engineering and Computer Science, University of Central Florida, Orlando, FL 32816, USA
2. College of Computer Engineering and Science, Prince Sattam Bin Abdulaziz University, Al-Kharj 11942, Saudi Arabia
3. College of Engineering, SUNY Polytechnic Institute, Utica, NY 13502, USA
Abstract
Deep learning models have been used to create a variety of effective image classification applications. However, they are vulnerable to adversarial attacks that seek to mislead the models into predicting incorrect classes. Our study of major adversarial attack models shows that they all specifically target and exploit neural network structures in their designs. This understanding led us to the hypothesis that most classical machine learning models, such as random forest (RF), are immune to these adversarial attacks because they do not rely on neural network architectures at all. Our experimental study of classical machine learning models against popular adversarial attacks supports this hypothesis. Based on this hypothesis, we propose a new adversarial-aware deep learning system that uses a classical machine learning model as a secondary verification system to complement the primary deep learning model in image classification. Although the secondary classical model is less accurate, it is used only for verification, so it does not affect the output accuracy of the primary deep learning model; at the same time, it can effectively detect an adversarial attack when a clear mismatch between the two models' predictions occurs. Our experiments on the CIFAR-100 dataset show that the proposed approach outperforms current state-of-the-art adversarial defense systems.
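The verification scheme described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: `primary_dnn_predict` and `secondary_rf_predict` are hypothetical stand-ins for the trained deep learning model and the classical (e.g., random forest) verifier, and the stub values simulate an adversarial input that fools only the neural network.

```python
import numpy as np

def primary_dnn_predict(image):
    # Placeholder: simulate a DNN fooled by an adversarial perturbation
    # into predicting class 7.
    return 7

def secondary_rf_predict(image):
    # Placeholder: simulate the classical verifier, unaffected by the
    # neural-network-targeted perturbation, still predicting class 3.
    return 3

def classify_with_verification(image):
    """Return the primary model's label plus an adversarial-attack flag.

    The secondary model is used only for verification: its less accurate
    prediction never replaces the primary output, but a clear mismatch
    between the two labels is treated as evidence of adversarial input.
    """
    primary = primary_dnn_predict(image)
    secondary = secondary_rf_predict(image)
    suspicious = primary != secondary
    return primary, suspicious

label, flagged = classify_with_verification(np.zeros((32, 32, 3)))
print(label, flagged)  # the mismatch (7 vs. 3) raises the flag
```

Because the secondary model only votes on whether to flag the input, clean-image accuracy remains that of the primary network; the mismatch test adds detection without degrading normal classification.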
Subjects:
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
References: 42 articles.
Cited by: 4 articles.