Authors:
Prithviraj Dasgupta, Joseph Collins
Abstract
Machine learning techniques are used extensively for automating various cybersecurity tasks. Most of these techniques use supervised learning algorithms that rely on training the algorithm to classify incoming data into categories, using data encountered in the relevant domain. A critical vulnerability of these algorithms is that they are susceptible to adversarial attacks, in which a malicious entity called an adversary deliberately alters the training data to misguide the learning algorithm into making classification errors. Adversarial attacks could render the learning algorithm unsuitable for use and leave critical systems vulnerable to cybersecurity attacks. This article provides a detailed survey of the state-of-the-art techniques that are used to make a machine learning algorithm robust against adversarial attacks by using the computational framework of game theory. We also discuss open problems and challenges and possible directions for further research that would make deep machine learning–based systems more robust and reliable for cybersecurity tasks.
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
31 articles.