Abstract
With the extensive application of deep learning (DL) algorithms in recent years, e.g., for detecting Android malware or vulnerable source code, artificial intelligence (AI) and machine learning (ML) are increasingly becoming essential to the development of cybersecurity solutions. However, sharing the same fundamental limitation with other DL application domains, such as computer vision (CV) and natural language processing (NLP), AI-based cybersecurity solutions are incapable of justifying their results (ranging from detection and prediction to reasoning and decision-making) and making them understandable to humans. Consequently, explainable AI (XAI) has emerged as a paramount topic addressing the challenges of making AI models explainable or interpretable to human users. It is particularly relevant in the cybersecurity domain, in that XAI may allow security operators, who are overwhelmed with tens of thousands of security alerts per day (most of which are false positives), to better assess potential threats and reduce alert fatigue. We conduct an extensive literature review on the intersection between XAI and cybersecurity. In particular, we investigate the existing literature from two perspectives: the applications of XAI to cybersecurity (e.g., intrusion detection, malware classification) and the security of XAI itself (e.g., attacks on XAI pipelines, potential countermeasures). We characterize the security of XAI with several security properties that have been discussed in the literature. We also formulate open questions that are either unanswered or insufficiently addressed in the literature, and discuss future directions of research.
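As a concrete illustration of the first perspective (applying XAI to cybersecurity), the sketch below shows how a model-agnostic explanation technique, permutation feature importance, can surface which inputs a black-box intrusion detector actually relies on. This is a minimal sketch under illustrative assumptions not taken from the surveyed papers: the data, the feature names, and the choice of permutation importance (rather than per-instance explainers such as LIME or SHAP) are ours.

```python
# Minimal sketch: explaining a black-box intrusion detector with a
# model-agnostic XAI technique (permutation feature importance).
# The data and feature names below are synthetic assumptions for
# illustration; a real deployment would use flow/alert features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical network-flow features, labeled benign (0) / malicious (1).
feature_names = ["duration", "src_bytes", "dst_bytes", "failed_logins",
                 "num_connections", "pkt_rate"]
X, y = make_classification(n_samples=2000, n_features=len(feature_names),
                           n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box detector: any fitted classifier would do here.
detector = RandomForestClassifier(n_estimators=100, random_state=0)
detector.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure the
# drop in held-out accuracy, yielding a global explanation of which
# features drive the detector's decisions.
result = permutation_importance(detector, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]:>16}: "
          f"{result.importances_mean[idx]:.3f} "
          f"+/- {result.importances_std[idx]:.3f}")
```

A per-instance explainer applied in the same spirit could justify individual alerts to an operator, which is the alert-triage use case the abstract highlights.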
Publisher
Springer Science and Business Media LLC
Subject
Electrical and Electronic Engineering
Cited by
31 articles.