Author:
Pande Sagar, Khamparia Aditya
Abstract
Research on Intrusion Detection Systems (IDSs) has been increasing in recent years, particularly research applying machine learning concepts, which have proven effective for IDSs; deep neural network-based models in particular have enhanced detection rates. At the same time, these models have become highly complex, and users are unable to trace the reasoning behind their decisions, which indicates the necessity of identifying the explanations behind those decisions to ensure the interpretability of the model. In this respect, this article proposes a model that is able to explain the predictions it produces. The proposed framework combines a conventional deep neural network-based intrusion detection system with interpretability of the model's predictions. It utilizes Shapley Additive Explanations (SHAP), combining local explainability with global explainability to enhance the interpretation of intrusion detection systems. The proposed model was implemented on the popular NSL-KDD dataset, and the framework's performance was evaluated using accuracy, precision, recall, and F1-score. The framework achieved an accuracy of about 99.99%. It was able to identify the top 4 features using local explainability and the top 20 features using global explainability.
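To illustrate the attribution idea behind SHAP that the abstract describes, the following is a minimal, self-contained sketch of exact Shapley-value computation for a single prediction. The tiny linear "detector", its weights, and the zero background vector are hypothetical stand-ins for the paper's deep neural network and the NSL-KDD features; real SHAP implementations approximate these values efficiently rather than enumerating all coalitions.

```python
# Minimal sketch of Shapley-value feature attribution (the idea behind SHAP).
# A hypothetical toy linear model stands in for the paper's deep neural
# network; exact Shapley values are computed by enumerating feature coalitions.
from itertools import combinations
from math import factorial

import numpy as np

def shapley_values(model, x, background):
    """Exact Shapley values for one instance x.

    "Missing" features are simulated by replacing them with a
    background value, a common convention in SHAP-style methods.
    """
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley weight of coalition S
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                z = background.copy()
                z[list(S)] = x[list(S)]   # coalition without feature i
                without_i = model(z)
                z[i] = x[i]               # coalition with feature i added
                with_i = model(z)
                phi[i] += w * (with_i - without_i)
    return phi

# Hypothetical linear "detector": for a linear model, the Shapley value of
# feature i is exactly weights[i] * (x[i] - background[i]).
weights = np.array([2.0, -1.0, 0.5])
model = lambda z: float(weights @ z)

x = np.array([1.0, 1.0, 1.0])
background = np.zeros(3)

phi = shapley_values(model, x, background)  # local explanation for x
# A global explanation averages |phi| per feature over many instances.
```

The local explanation ranks features for one prediction (the paper's "top 4"), while averaging the magnitudes of these attributions over a dataset yields the global feature ranking (the paper's "top 20").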
Publisher
AGHU University of Science and Technology Press
Subject
Artificial Intelligence, Computational Theory and Mathematics, Computer Graphics and Computer-Aided Design, Computer Networks and Communications, Computer Vision and Pattern Recognition, Modeling and Simulation, Computer Science (miscellaneous)
Cited by
4 articles.