Authors:
Michael Kouremetis, Dean Lawrence, Ron Alford, Zoe Cheuvront, David Davila, Benjamin Geyer, Trevor Haigh, Ethan Michalak, Rachel Murphy, Gianpaolo Russo
Abstract
As the capabilities of cyber adversaries continue to evolve, now in parallel with the explosion of maturing and publicly available artificial intelligence (AI) technologies, cyber defenders may reasonably wonder when cyber adversaries will begin to field these AI technologies as well. In this regard, some promising (read: scary) areas of AI for cyber attack capabilities are search, automated planning, and reinforcement learning. As such, one possible defensive mechanism against future AI-enabled adversaries is cyber deception. To that end, in this work we present and evaluate Mirage, an experimentation system, demonstrated in both emulation and simulation forms, that allows for the implementation and testing of novel cyber deceptions designed to counter cyber adversaries that use AI search and planning capabilities.
Publisher
Springer Science and Business Media LLC