Abstract
Purpose
This study aims to make the state-of-the-art machine learning models used for intrusion detection understandable to humans, and to study the relationship between the explainability and the performance of these models.
Design/methodology/approach
The authors study a recent intrusion data set collected from real-world scenarios and use state-of-the-art machine learning algorithms to detect intrusions. The authors apply several novel techniques to explain the models and then manually evaluate the explanations. The authors then compare model performance before and after explainability-based feature selection.
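The pipeline described above (train a model, explain it, select features based on the explanation, then retrain and compare) can be sketched roughly as follows. The data set, model, and explanation technique here (synthetic data, a random forest, and permutation importance) are illustrative stand-ins, not the authors' actual choices.

```python
# Hypothetical sketch of explainability-based feature selection; synthetic
# data and permutation importance stand in for the paper's actual setup.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic "intrusion" data: 20 features, only 5 of them informative.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=5,
                           n_redundant=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: train and score on all features.
full_model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
acc_full = full_model.score(X_test, y_test)

# Explain: rank features by permutation importance on held-out data.
imp = permutation_importance(full_model, X_test, y_test, n_repeats=10,
                             random_state=0)
top = np.argsort(imp.importances_mean)[::-1][:8]  # keep the 8 most important

# Retrain on the explanation-selected features and compare performance.
slim_model = RandomForestClassifier(random_state=0).fit(X_train[:, top],
                                                        y_train)
acc_slim = slim_model.score(X_test[:, top], y_test)
print(acc_full, acc_slim)
```

A smaller, explanation-selected feature set also reduces the computational cost of training and inference, which is the trade-off the study evaluates.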
Findings
The authors confirm their hypothesis and claim that by enforcing explainability, the model becomes more robust, requires less computational power, and achieves better predictive performance.
Originality/value
The authors draw their conclusions based on their own research and experimental work.
Subject
Computer Networks and Communications, Information Systems
Cited by 22 articles.