Author:
Stadlhofer Anja, Mezhuyev Vitaliy
Abstract
One of the main reasons why machine learning (ML) methods are not yet widely used in productive business processes is the lack of confidence in the results of an ML model. To improve the situation, interpretability methods may be used, which provide insight into the internal structure of an ML model and into the criteria on which the model bases a certain prediction. This paper considers the state of the art in interpretability methods and applies selected methods to an industrial use case. Two methods, LIME and SHAP, were selected from the literature and then implemented in a use case for image classification with a convolutional neural network. The research methodology consists of three parts: the first is a literature analysis, followed by the practical implementation of an ML model for image classification and the subsequent application of the interpretability methods; the third part is a multi-criteria comparison of the selected LIME and SHAP methods. This work enables companies to select the most effective interpretability method for their use case and increases companies' motivation to use ML.
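As a hedged illustration of the approach the abstract describes, the sketch below applies LIME and SHAP to a Keras convolutional image classifier. It is not the paper's implementation: the model file, data arrays, and parameter choices (model, x_train, x_test, background size, num_samples) are hypothetical placeholders, and the only assumption is that the lime and shap Python packages are available.

```python
# Hypothetical sketch: explaining a Keras CNN image classifier with LIME and SHAP.
# All names and file paths below are placeholders, not taken from the paper.
import numpy as np
import shap
from lime import lime_image
from tensorflow import keras

model = keras.models.load_model("cnn_classifier.h5")  # assumed pre-trained CNN
x_train = np.load("x_train.npy")                      # assumed training images, shape (N, H, W, 3)
x_test = np.load("x_test.npy")                        # assumed test images

# --- LIME: perturb superpixels of one image and fit a local surrogate model ---
lime_explainer = lime_image.LimeImageExplainer()
explanation = lime_explainer.explain_instance(
    x_test[0].astype("double"),   # single image to explain
    model.predict,                # classifier function returning class probabilities
    top_labels=3, hide_color=0, num_samples=1000,
)
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)

# --- SHAP: approximate Shapley values via expected gradients over a background sample ---
background = x_train[np.random.choice(len(x_train), 50, replace=False)]
shap_explainer = shap.GradientExplainer(model, background)
shap_values = shap_explainer.shap_values(x_test[:3])
shap.image_plot(shap_values, x_test[:3])              # per-pixel attribution plot
```

Both outputs highlight image regions that drive a prediction, which is the kind of evidence the paper's multi-criteria comparison of LIME and SHAP evaluates.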
Publisher
Springer Science and Business Media LLC
Cited by 4 articles.
1. Pretrained Deep Learning Models to Reduce Data Needed for Quality Assurance;Proceedings of the 2024 10th International Conference on Computer Technology Applications;2024-05-15
2. Approaches for data collection and process standardization in smart manufacturing: Systematic literature review;Journal of Industrial Information Integration;2024-03
3. CapStyleBERT: Incorporating Capitalization and Style Information into BERT for Enhanced resumes parsing;Proceedings of the 2024 13th International Conference on Software and Computer Applications;2024-02
4. Principles of Machine Learning;Artificial Intelligence in Medical Imaging Technology;2024