Affiliation:
1. Defence Science and Technology Laboratory, UK
2. University of York and Assuring Autonomy International Programme, UK
Abstract
Machine learning has evolved into an enabling technology for a wide range of highly successful applications. The potential for this success to continue and accelerate has placed machine learning (ML) at the top of research, economic, and political agendas. Such unprecedented interest is fuelled by a vision of ML applicability extending to healthcare, transportation, defence, and other domains of great societal importance. Achieving this vision requires the use of ML in safety-critical applications that demand levels of assurance beyond those needed for current ML applications. Our article provides a comprehensive survey of the state of the art in the assurance of ML, i.e., in the generation of evidence that ML is sufficiently safe for its intended use. The survey covers the methods capable of providing such evidence at different stages of the machine learning lifecycle, i.e., of the complex, iterative process that starts with the collection of the data used to train an ML component for a system, and ends with the deployment of that component within the system. The article begins with a systematic presentation of the ML lifecycle and its stages. We then define assurance desiderata for each stage, review existing methods that contribute to achieving these desiderata, and identify open challenges that require further research.
Funder
Assuring Autonomy International Programme and the UKRI project
Publisher
Association for Computing Machinery (ACM)
Subject
General Computer Science, Theoretical Computer Science
References: 189 articles.
1. Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2. Ajaya Adhikari, D. M. Tax, Riccardo Satta, and Matthias Fath. 2018. Example and Feature Importance-based Explanations for Black-box Machine Learning Models. arXiv:1812.09044. Retrieved from https://arxiv.org/abs/1812.09044.
3. Assessing the Impact of Changing Environments on Classifier Performance
4. Augmented Reality Meets Computer Vision: Efficient Data Generation for Urban Driving Scenes
Cited by
130 articles.