Addressing uncertainty in the safety assurance of machine-learning

Authors:

Simon Burton, Benjamin Herd

Abstract

There is increasing interest in the application of machine learning (ML) technologies to safety-critical cyber-physical systems, with the promise of increased levels of autonomy due to their potential for solving complex perception and planning tasks. However, demonstrating the safety of ML is seen as one of the most challenging hurdles to its widespread deployment for such applications. In this paper we explore the factors that make the safety assurance of ML such a challenging task. In particular, we address the impact of uncertainty on the confidence in ML safety assurance arguments. We show how this uncertainty is related to the complexity of the ML models as well as the inherent complexity of the tasks that they are designed to implement. Based on definitions of uncertainty as well as an exemplary assurance argument structure, we examine typical weaknesses in the argument and how these can be addressed. The analysis combines an understanding of the causes of insufficiencies in ML models with a systematic analysis of the types of asserted context, asserted evidence, and asserted inference within the assurance argument. This leads to a systematic identification of requirements on both the assurance argument structure and the supporting evidence. We conclude that a combination of qualitative arguments and quantitative evidence is required to build a robust argument for the safety-related properties of ML functions, one that is continuously refined to reduce residual and emerging uncertainties in the argument after the function has been deployed into the target environment.
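The quantitative evidence the abstract calls for often takes the form of uncertainty estimates attached to an ML function's outputs. A common way to obtain such estimates (not taken from the paper itself; the function name predictive_uncertainty and the toy ensemble below are hypothetical) is the entropy-based decomposition of a classifier ensemble's predictive distribution into aleatoric (data) and epistemic (model) components. A minimal sketch, assuming each ensemble member returns a softmax vector for the same input:

    import numpy as np

    def predictive_uncertainty(ensemble_probs):
        """Decompose predictive uncertainty for a classifier ensemble.

        ensemble_probs: array of shape (n_members, n_classes), each row
        the softmax output of one ensemble member for the same input.
        Returns (total, aleatoric, epistemic) entropies in nats.
        """
        eps = 1e-12  # guard against log(0)
        mean_probs = ensemble_probs.mean(axis=0)
        # Total uncertainty: entropy of the averaged predictive distribution.
        total = -np.sum(mean_probs * np.log(mean_probs + eps))
        # Aleatoric uncertainty: mean entropy of the individual members.
        aleatoric = -np.mean(
            np.sum(ensemble_probs * np.log(ensemble_probs + eps), axis=1))
        # Epistemic uncertainty: mutual information = total - aleatoric.
        return total, aleatoric, total - aleatoric

    # Hypothetical example: five members disagreeing on a three-class input.
    probs = np.array([
        [0.7, 0.2, 0.1],
        [0.6, 0.3, 0.1],
        [0.2, 0.7, 0.1],
        [0.8, 0.1, 0.1],
        [0.3, 0.6, 0.1],
    ])
    total, aleatoric, epistemic = predictive_uncertainty(probs)
    print(f"total={total:.3f} aleatoric={aleatoric:.3f} epistemic={epistemic:.3f}")

A high epistemic component signals disagreement between models (insufficient knowledge, e.g. out-of-distribution inputs), which is the kind of residual uncertainty an assurance argument would need to monitor and reduce after deployment.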

Funder

Fraunhofer-Gesellschaft

Publisher

Frontiers Media SA

Subject

Computer Science Applications, Computer Vision and Pattern Recognition, Human-Computer Interaction, Computer Science (miscellaneous)


Cited by 8 articles:

1. CuneiForm Method for Assuring the Safety of ML-Based Computer Vision Development Datasets. 2024 IEEE 32nd International Requirements Engineering Conference Workshops (REW), 2024-06-24.

2. Can you trust your ML metrics? Using Subjective Logic to determine the true contribution of ML metrics for safety. Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing, 2024-04-08.

3. Can you trust your Agent? The Effect of Out-of-Distribution Detection on the Safety of Reinforcement Learning Systems. Proceedings of the 39th ACM/SIGAPP Symposium on Applied Computing, 2024-04-08.

4. Machine Learning-Based Uranium Prospectivity Mapping and Model Explainability Research. Minerals, 2024-01-24.

5. Uncertainty-Aware Evaluation of Quantitative ML Safety Requirements. Lecture Notes in Computer Science, 2024.
