Affiliations:
1. Stanford University, Stanford, CA, USA
2. Dexterity, Inc., Redwood City, CA, USA
Abstract
When deploying machine learning models in high-stakes robotics applications, the ability to detect unsafe situations is crucial. Early warning systems can provide alerts when an unsafe situation is imminent (in the absence of corrective action). To reliably improve safety, these warning systems should have a provable false negative rate; that is, of the situations that are unsafe, fewer than an ϵ fraction should occur without an alert. In this work, we present a framework that combines a statistical inference technique known as conformal prediction with a simulator of robot/environment dynamics in order to tune warning systems to provably achieve an ϵ false negative rate using as few as 1/ϵ data points. We apply our framework to a driver warning system and a robotic grasping application, and empirically demonstrate the guaranteed false negative rate while also observing a low false detection (positive) rate.
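The calibration idea behind this guarantee can be illustrated with a minimal split-conformal sketch. This is not the paper's exact algorithm: the function name, the Gaussian placeholder scores, and the convention that larger scores mean "more dangerous" are illustrative assumptions. The key fact used is that, for exchangeable continuous scores, a fresh score falls below the k-th smallest of n calibration scores with probability k/(n+1).

```python
import numpy as np

def calibrate_alert_threshold(unsafe_scores, epsilon):
    """Return a threshold tau such that a new (exchangeable) unsafe
    situation satisfies P(score < tau) <= epsilon, i.e. the alert rule
    "raise an alert when score >= tau" misses at most an epsilon
    fraction of unsafe situations."""
    scores = np.sort(np.asarray(unsafe_scores, dtype=float))
    n = scores.size
    # Order-statistic rank: P(new score < k-th smallest) <= k / (n + 1).
    k = int(np.floor(epsilon * (n + 1)))
    if k < 1:
        raise ValueError(
            f"epsilon={epsilon} needs at least {int(np.ceil(1 / epsilon)) - 1} "
            "calibration points"
        )
    return scores[k - 1]

# Hypothetical usage: danger scores from simulated unsafe rollouts.
rng = np.random.default_rng(0)
unsafe_scores = rng.normal(loc=2.0, scale=0.5, size=200)  # placeholder scores
tau = calibrate_alert_threshold(unsafe_scores, epsilon=0.05)
# At runtime, raise an alert whenever the monitor's score is >= tau.
```

A nontrivial threshold exists as soon as ϵ(n+1) ≥ 1, i.e. with roughly 1/ϵ calibration points, which is consistent with the sample-complexity claim in the abstract.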
Funder
NASA University Leadership Initiative
Subject
Applied Mathematics, Artificial Intelligence, Electrical and Electronic Engineering, Mechanical Engineering, Modeling and Simulation, Software
Cited by
1 article.