Enhancing the Sustainability of Deep-Learning-Based Network Intrusion Detection Classifiers against Adversarial Attacks
-
Published:2023-06-19
Issue:12
Volume:15
Page:9801
-
ISSN:2071-1050
-
Container-title:Sustainability
-
Language:en
-
Short-container-title:Sustainability
Author:
Alotaibi, Afnan (1); Rassam, Murad A. (1,2) ORCID
Affiliation:
1. Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia
2. Faculty of Engineering and Information Technology, Taiz University, Taiz 6803, Yemen
Abstract
An intrusion detection system (IDS) is an effective tool for securing networks and a dependable technique for improving a user’s internet security. It alerts administrators whenever anomalous behavior occurs on the network. An IDS fundamentally depends on classifying network packets as either benign or attack traffic. Moreover, IDSs can achieve better results when built with machine learning (ML)/deep learning (DL) techniques, such as convolutional neural networks (CNNs). However, a limitation of building a reliable IDS with ML/DL techniques is their vulnerability to adversarial attacks: inputs crafted by attackers to compromise ML/DL models and degrade their accuracy. Thus, this paper describes the construction of a sustainable IDS based on the CNN technique and presents a defense method against adversarial attacks that enhances the IDS’s accuracy and makes its classification more reliable. To achieve this goal, first, two CNN-based IDS models were built to enhance intrusion detection accuracy. Second, seven adversarial attack scenarios were designed against these CNN-based IDS models to test their reliability and efficiency. The experimental results show that the CNN-based IDS models achieved accuracies of 97.51% and 95.43% before the adversarial scenarios were applied. Furthermore, the adversarial attacks caused the models’ accuracy to decrease significantly, to varying degrees from one attack scenario to another; the Auto-PGD and BIM attacks had the strongest effect against the CNN-based IDS models, dropping their accuracy to 2.92% and 3.46%, respectively. Third, this research applied the adversarial perturbation elimination with generative adversarial nets (APE_GAN++) defense method to the CNN-based IDS models affected by adversarial attacks, which recovered their accuracy to an appreciable degree, with scores ranging between 78.12% and 89.40%.
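As an illustration of the kind of evasion attack the abstract refers to (a sketch, not the authors' implementation), the following Python/PyTorch snippet shows a BIM-style iterative perturbation of input features against a generic CNN classifier; the model, the feature tensors x and y, and the eps/alpha/n_iter hyperparameters are illustrative assumptions.

# Hypothetical sketch of a BIM-style (iterative FGSM) evasion attack against a
# CNN traffic classifier; the model, inputs, and hyperparameters below are
# illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

def bim_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
               eps: float = 0.1, alpha: float = 0.01, n_iter: int = 10) -> torch.Tensor:
    """Return adversarial copies of x constrained to an L-infinity ball of radius eps."""
    loss_fn = nn.CrossEntropyLoss()
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)                  # loss w.r.t. the true labels
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # take a step that increases the loss
            x_adv = torch.clamp(x_adv, x - eps, x + eps) # project back into the eps-ball around x
            x_adv = torch.clamp(x_adv, 0.0, 1.0)         # keep normalized features in a valid range
    return x_adv.detach()

Auto-PGD follows the same projected-gradient idea with an adaptive step size and momentum, while the APE_GAN++ defense instead trains a GAN generator to strip such perturbations from inputs before they reach the classifier.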
Subject
Management, Monitoring, Policy and Law; Renewable Energy, Sustainability and the Environment; Geography, Planning and Development; Building and Construction
References: 45 articles.
Cited by: 7 articles.