Affiliations:
1. Purdue University, USA
2. DEVCOM Army Research Laboratory, USA
Abstract
Given the recent impact of deep reinforcement learning (RL) in training agents to win complex games such as StarCraft II and Dota 2 (Defense of the Ancients), there has been a surge in research into exploiting learning-based techniques for professional wargaming, battlefield simulation, and modeling. Real-time strategy games and simulators have become a valuable resource for operational planning and military research. However, recent work has shown that such learning-based approaches are highly susceptible to adversarial perturbations. In this paper, we investigate the robustness of an agent trained for a command and control (C2) task in an environment controlled by an active adversary. The C2 agent is trained on custom StarCraft II maps using the state-of-the-art RL algorithms Asynchronous Advantage Actor-Critic (A3C) and Proximal Policy Optimization (PPO). We empirically show that an agent trained with these algorithms is highly susceptible to noise injected by the adversary, and we investigate the effects of these perturbations on the trained agent's performance. Our work highlights the urgent need to develop more robust training algorithms, especially for critical arenas like the battlefield.
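The paper's own perturbation model is not reproduced here, but the following is a minimal sketch of how an adversary might inject bounded noise into the observations a trained agent sees at evaluation time. It assumes a Gym-style reset/step interface; the wrapper class name, the epsilon noise-budget parameter, and the uniform-noise choice are illustrative assumptions, not details taken from the paper.

    import numpy as np

    class NoisyObservationWrapper:
        """Perturb the observations a trained agent sees while leaving
        the true environment state untouched (illustrative sketch only)."""

        def __init__(self, env, epsilon=0.1, rng=None):
            self.env = env                   # wrapped Gym-style environment
            self.epsilon = epsilon           # adversary's per-feature noise budget
            self.rng = rng or np.random.default_rng()

        def _perturb(self, obs):
            # Add bounded uniform noise to every observation feature.
            obs = np.asarray(obs, dtype=float)
            noise = self.rng.uniform(-self.epsilon, self.epsilon, size=obs.shape)
            return obs + noise

        def reset(self):
            return self._perturb(self.env.reset())

        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            return self._perturb(obs), reward, done, info

A policy trained with A3C or PPO on clean observations can then be rolled out through such a wrapper, sweeping epsilon upward to measure how quickly episode reward or win rate degrades.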