Provable observation noise robustness for neural network control systems

Authors:

Veena Krish, Andrew Mata, Stanley Bak, Kerianne Hobbs and Amir Rahmati

Abstract

Neural networks are vulnerable to adversarial perturbations: slight changes to inputs that can result in unexpected outputs. In neural network control systems, these inputs are often noisy sensor readings. In such settings, natural sensor noise – or an adversary who can manipulate the readings – may cause the system to fail. In this paper, we introduce the first technique to provably compute the minimum magnitude of sensor noise that can cause a neural network control system to violate a safety property from a given initial state. Our algorithm constructs a tree of possible successor states with increasing noise until the specification is violated. We build on open-loop neural network verification methods to determine the least amount of noise that could change actions at each step of a closed-loop execution. We prove that this method identifies the unsafe trajectory with the least noise that leads to a safety violation. We evaluate our method on four systems: the Cart Pole and LunarLander environments from OpenAI Gym, an aircraft collision avoidance system based on a neural network compression of ACAS Xu, and the SafeRL Aircraft Rejoin scenario. Our analysis produces unsafe trajectories in which sensor deviations under 1% of the noise range make the systems behave erroneously.
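The search described in the abstract can be sketched in code. The following is our own illustrative sketch, not the authors' implementation: `min_flip_noise` is a hypothetical stand-in for the open-loop verification query (the smallest sensor perturbation at a state that makes the network output a given action), and `step` stands in for the plant dynamics. A best-first expansion ordered by the largest single-step perturbation used so far returns the violation reachable with the least worst-case noise.

```python
import heapq

def min_noise_to_violation(x0, step, actions, min_flip_noise, is_unsafe, horizon):
    """Best-first search over closed-loop trajectories.

    Expands a tree of successor states ordered by the largest
    single-step sensor perturbation used so far; the first unsafe
    state popped therefore uses the least worst-case noise.
    """
    # Each entry: (max noise used so far, tie-breaker, state, depth)
    frontier = [(0.0, 0, x0, 0)]
    counter = 1
    while frontier:
        noise, _, x, depth = heapq.heappop(frontier)
        if is_unsafe(x):
            return noise, x
        if depth >= horizon:
            continue
        for a in actions:
            # Oracle call: smallest sensor perturbation at x that
            # makes the controller output action a (inf if none).
            eps = min_flip_noise(x, a)
            if eps == float("inf"):
                continue
            heapq.heappush(frontier,
                           (max(noise, eps), counter, step(x, a), depth + 1))
            counter += 1
    return None  # no violation reachable within the horizon
```

On a toy one-dimensional system whose nominal policy steers the state toward zero, the search finds that a per-step perturbation budget of 0.2 suffices to drive the state to an unsafe region, while zero-noise trajectories stay safe.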

Publisher

Cambridge University Press (CUP)

References (37 articles):

1. Owen, MP , Panken, A , Moss, R , Alvarez, L and Leeper, C (2019) ACAS Xu: integrated collision avoidance and detect and avoid capability for UAS. In 2019 IEEE/AIAA 38th Digital Avionics Systems Conference (DASC). https://doi.org/10.1109/DASC43569.2019.9081758.

2. Julian, KD , Lopez, J , Brush, JS , Owen, MP and Kochenderfer, MJ (2016) Policy compression for aircraft collision avoidance systems. In 2016 IEEE/AIAA 35th digital avionics systems conference (DASC). https://doi.org/10.1109/DASC.2016.7778091.

3. Althoff, M, Frehse, G and Girard, A (2021) Set propagation techniques for reachability analysis. Annual Review of Control, Robotics, and Autonomous Systems.

4. Zhang, H , Chen, H , Boning, D and Hsieh, C-J (2021) Robust reinforcement learning on state observations with learned optimal adversary. arXiv preprint arXiv:2101.08452.

5. Pinto, L , Davidson, J , Sukthankar, R and Gupta, A (2017) Robust adversarial reinforcement learning. In International conference on machine learning. PMLR, pp. 2817–2826.

Cited by 1 article:

1. Zero-One Attack: Degrading Closed-Loop Neural Network Control Systems using State-Time Perturbations. In 2024 ACM/IEEE 15th International Conference on Cyber-Physical Systems (ICCPS), 13 May 2024.
