Inherent Diverse Redundant Safety Mechanisms for AI-Based Software Elements in Automotive Applications

Authors:

Mandar Manohar Pitale (1), Alireza Abbaspour (2), Devesh Upadhyay (3)

Affiliations:

1. NVIDIA

2. Qualcomm Technologies Inc

3. SAAB Inc

Abstract

This paper explores the role and challenges of Artificial Intelligence (AI) algorithms, specifically AI-based software elements, in autonomous driving systems. These AI systems are fundamental to executing real-time critical functions in complex, high-dimensional environments. They handle vital tasks such as multi-modal perception, cognition, and decision-making, including motion planning, lane keeping, and emergency braking. A primary concern relates to the ability (and necessity) of AI models to generalize beyond their initial training data. This generalization issue becomes evident in real-time scenarios, where models frequently encounter inputs not represented in their training or validation data. In such cases, AI systems must still function effectively despite distributional or domain shifts. This paper investigates the risks associated with overconfident AI models in safety-critical applications such as autonomous driving. To mitigate these risks, the paper proposes training methods that help AI models maintain performance without overconfidence, including certainty reporting architectures and diverse training data. While various distribution-based methods exist to provide safety mechanisms for AI models, there is a noted lack of systematic assessment of these methods, especially in the context of safety-critical automotive applications. Many methods in the literature do not adapt well to the quick response times required in safety-critical edge applications. This paper reviews these methods, discusses their suitability for safety-critical applications, and highlights their strengths and limitations. The paper also proposes potential improvements to enhance the safety and reliability of AI algorithms in autonomous vehicles in the context of rapid and accurate decision-making processes.
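To make the certainty-reporting idea concrete, the sketch below shows one common, lightweight way to attach an uncertainty signal to a perception classifier: Monte Carlo dropout with predictive entropy. This is an illustrative assumption, not the method proposed in the paper; the model architecture, the number of stochastic passes, and the rejection threshold are all placeholders.

```python
# Illustrative sketch only (assumed, not the paper's method): Monte Carlo
# dropout as a simple certainty-reporting mechanism for a classifier.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DropoutClassifier(nn.Module):
    """Small classifier whose dropout layers stay stochastic at inference."""

    def __init__(self, in_features: int = 64, num_classes: int = 10, p: float = 0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 128),
            nn.ReLU(),
            nn.Dropout(p),
            nn.Linear(128, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


@torch.no_grad()
def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Average several stochastic forward passes and report predictive entropy."""
    model.train()  # keep dropout active so repeated passes differ
    probs = torch.stack([F.softmax(model(x), dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy


if __name__ == "__main__":
    model = DropoutClassifier()
    x = torch.randn(4, 64)  # stand-in for perception features
    mean_probs, entropy = predict_with_uncertainty(model, x)
    # A downstream safety mechanism could defer or reject decisions whose
    # predictive entropy exceeds a calibrated threshold (value assumed here).
    flagged = entropy > 1.5
    print(mean_probs.argmax(dim=-1), entropy, flagged)
```

In a safety-critical setting, the extra forward passes trade latency for an uncertainty estimate, which is precisely the edge-deployment tension the paper discusses; the threshold itself would have to be calibrated offline on held-out and shifted data.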

Publisher

SAE International

