Inherent Diverse Redundant Safety Mechanisms for AI-Based Software Elements in Automotive Applications
Authors:
Mandar Manohar Pitale1, Alireza Abbaspour2, Devesh Upadhyay3
Affiliation:
1. NVIDIA 2. Qualcomm Technologies Inc 3. SAAB Inc
Abstract
This paper explores the role and challenges of Artificial Intelligence (AI) algorithms, specifically AI-based software elements, in autonomous driving systems. These AI systems are fundamental to executing real-time critical functions in complex and high-dimensional environments. They handle vital functions such as multi-modal perception, cognition, and decision-making, including motion planning, lane keeping, and emergency braking. A primary concern is the ability (and necessity) of AI models to generalize beyond their initial training data. This generalization issue becomes evident in real-time scenarios, where models frequently encounter inputs not represented in their training or validation data. In such cases, AI systems must still function effectively despite distributional or domain shifts. This paper investigates the risks associated with overconfident AI models in safety-critical applications like autonomous driving. To mitigate these risks, methods are proposed for training AI models that maintain performance without overconfidence, including certainty reporting architectures and diverse training data. While various distribution-based methods exist to provide safety mechanisms for AI models, there is a noted lack of systematic assessment of these methods, especially in the context of safety-critical automotive applications. Many methods in the literature do not adapt well to the fast response times required in safety-critical edge applications. This paper reviews these methods, discusses their suitability for safety-critical applications, and highlights their strengths and limitations. The paper also proposes potential improvements to enhance the safety and reliability of AI algorithms in autonomous vehicles in the context of rapid and accurate decision-making processes.
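To make the idea of a certainty reporting architecture concrete, the sketch below shows one common pattern consistent with the abstract's goals: temperature-scaled softmax plus an entropy gate that rejects low-confidence predictions so a redundant safety mechanism can take over. This is a minimal illustration, not the paper's method; the function names, temperature value, and threshold are assumptions chosen for the example.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; T > 1 softens overconfident outputs."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def certainty_report(logits, temperature=2.0, entropy_threshold=0.5):
    """Return (predicted_class, confidence, accept).

    `accept` is False when normalized predictive entropy exceeds the
    threshold, signaling the system to fall back to a diverse redundant
    channel (e.g., a rule-based safety monitor) instead of acting on an
    overconfident prediction. Threshold and temperature are illustrative.
    """
    p = softmax(logits, temperature)
    # Normalized entropy in [0, 1]: 0 = fully certain, 1 = uniform.
    entropy = -np.sum(p * np.log(p + 1e-12)) / np.log(len(p))
    return int(np.argmax(p)), float(p.max()), bool(entropy <= entropy_threshold)
```

A sharply peaked logit vector passes the gate, while a near-uniform one (as might arise under distributional shift) is rejected, illustrating how an element can report certainty alongside its prediction rather than always committing to an answer.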
Publisher
SAE International