Affiliation:
1. SnT Centre, University of Luxembourg, Luxembourg, Luxembourg
2. SnT Centre, University of Luxembourg and School of EECS, University of Ottawa, Ottawa, Canada
Abstract
Deep neural networks (DNNs) have demonstrated superior performance over classical machine learning to support many features in safety-critical systems. Although DNNs are now widely used in such systems (e.g., self-driving cars), there is limited progress regarding automated support for functional safety analysis in DNN-based systems. For example, the identification of root causes of errors, to enable both risk analysis and DNN retraining, remains an open problem. In this article, we propose SAFE, a black-box approach to automatically characterize the root causes of DNN errors. SAFE relies on a transfer learning model pre-trained on ImageNet to extract features from error-inducing images. It then applies a density-based clustering algorithm to detect arbitrarily shaped clusters of images that model plausible causes of error. Finally, the clusters are used to effectively retrain and improve the DNN. The black-box nature of SAFE is motivated by our objective not to require changes or even access to the DNN internals, thus facilitating adoption.
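The pipeline the abstract describes can be illustrated with a minimal sketch: fixed-dimensional feature vectors extracted from error-inducing images by a pre-trained model are grouped with a density-based algorithm such as DBSCAN, which detects arbitrarily shaped clusters and leaves outliers unassigned. This is not the authors' implementation; for self-containedness, random vectors stand in for the pre-trained ImageNet features, and all names and parameter values are illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Stand-in for feature vectors that a model pre-trained on ImageNet
# would extract from error-inducing images: two dense groups (two
# plausible root causes) plus scattered points with no common cause.
cause_a = rng.normal(loc=0.0, scale=0.05, size=(40, 16))
cause_b = rng.normal(loc=1.0, scale=0.05, size=(40, 16))
noise = rng.uniform(-2.0, 3.0, size=(10, 16))
features = np.vstack([cause_a, cause_b, noise])

# Density-based clustering finds arbitrarily shaped clusters and
# labels low-density outliers -1, which suits root-cause grouping
# where some errors share no common characteristic.
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(features)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print(n_clusters)  # number of candidate root-cause clusters found
```

Each resulting cluster can then be inspected by an analyst (risk analysis) or used to select images for retraining, as the abstract outlines.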
Experimental results based on case studies in the automotive domain show SAFE's superior ability to identify different root causes of DNN errors. SAFE also yields significant improvements in DNN accuracy after retraining, while requiring significantly less execution time and memory than alternatives.
Funder
Luxembourg’s National Research Fund
NSERC of Canada
Publisher
Association for Computing Machinery (ACM)
Cited by 10 articles.
1. See the Forest, not Trees: Unveiling and Escaping the Pitfalls of Error-Triggering Inputs in Neural Network Testing;Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis;2024-09-11
2. Deep image clustering: A survey;Neurocomputing;2024-09
3. GIST: Generated Inputs Sets Transferability in Deep Learning;ACM Transactions on Software Engineering and Methodology;2024-06-13
4. Supporting Safety Analysis of Image-processing DNNs through Clustering-based Approaches;ACM Transactions on Software Engineering and Methodology;2024-06-03
5. Test Optimization in DNN Testing: A Survey;ACM Transactions on Software Engineering and Methodology;2024-04-20