Abstract
Neural networks are vulnerable to adversarial perturbations: slight changes to inputs that can result in unexpected outputs. In neural network control systems, these inputs are often noisy sensor readings. In such settings, natural sensor noise – or an adversary who can manipulate these readings – may cause the system to fail. In this paper, we introduce the first technique to provably compute the minimum magnitude of sensor noise that can cause a neural network control system to violate a safety property from a given initial state. Our algorithm constructs a tree of possible successors with increasing noise until a specification is violated. We build on open-loop neural network verification methods to determine the least amount of noise that could change actions at each step of a closed-loop execution. We prove that this method identifies the unsafe trajectory with the least noise that leads to a safety violation. We evaluate our method on four systems: the Cart Pole and Lunar Lander environments from OpenAI Gym, an aircraft collision avoidance system based on a neural network compression of ACAS Xu, and the SafeRL Aircraft Rejoin scenario. Our analysis produces unsafe trajectories where deviations under 1% of the sensor noise range make the systems behave erroneously.
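The search described above can be sketched in a few lines. The following is a minimal toy illustration, not the authors' implementation: the dynamics, policy, and the grid-based `min_noise_to_flip` stand-in for the open-loop verifier are all hypothetical. A best-first (uniform-cost) search over closed-loop successors, ordered by the worst single-step noise magnitude on the path, makes the first unsafe state popped carry the provably minimal noise bound.

```python
import heapq

def policy(obs):
    # Hypothetical 1-D controller: push the state toward the origin.
    return -1 if obs > 0 else 1

def step(state, action):
    # Hypothetical dynamics: the state drifts by the chosen action.
    return state + action

def min_noise_to_flip(state, eps_grid):
    # Stand-in for an open-loop neural network verifier: the smallest noise
    # magnitude that changes the action taken at `state` (here found by
    # scanning a coarse grid; the paper uses NN verification instead).
    base = policy(state)
    for eps in eps_grid:
        for obs in (state - eps, state + eps):
            if policy(obs) != base:
                return eps, policy(obs)
    return None, base

def min_violation_noise(x0, unsafe, horizon, eps_grid):
    # Best-first search: frontier entries are (worst per-step noise on the
    # path so far, state, depth), popped in order of increasing noise.
    frontier = [(0.0, x0, 0)]
    while frontier:
        noise, state, depth = heapq.heappop(frontier)
        if unsafe(state):
            return noise  # first unsafe pop => minimal worst-step noise
        if depth == horizon:
            continue
        # Noise-free successor.
        heapq.heappush(frontier, (noise, step(state, policy(state)), depth + 1))
        # Successor reachable only by perturbing this step's sensor reading.
        eps, flipped = min_noise_to_flip(state, eps_grid)
        if eps is not None:
            heapq.heappush(frontier, (max(noise, eps), step(state, flipped), depth + 1))
    return None  # no violation reachable within the horizon
```

Because the priority is the maximum per-step noise along a path, the first violating state removed from the heap cannot be beaten by any path still in the frontier, mirroring the optimality argument in the paper.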
Publisher
Cambridge University Press (CUP)
Cited by 1 article.