Abstract
The use of AI and machine learning models in industry is rapidly increasing. Because of this growth and the strong performance of these models, more mission-critical intelligent decision-making systems have been developed. Despite their success, AI solutions used for decision-making have a significant drawback: a lack of transparency. The opacity of their behavior, particularly in complex state-of-the-art machine learning algorithms, leaves users with little understanding of how these models reach specific decisions. To address this issue, algorithms such as LIME and SHAP (Kernel SHAP) have been introduced. These algorithms aim to explain AI models by generating data samples around an intended test instance through perturbation of its features. This process has the drawback of potentially generating invalid data points that fall outside the data domain. In this paper, we aim to improve LIME and SHAP by using a Variational AutoEncoder (VAE), pre-trained on the training dataset, to generate realistic data around the test instance. We also employ a sensitivity-based feature importance with a Boltzmann distribution to help explain the behavior of the black-box model around the intended test instance.
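The abstract outlines the core idea: replace the raw feature perturbation used by LIME and Kernel SHAP with samples drawn from the latent space of a pre-trained VAE, then weight those samples with a Boltzmann (softmax over negative distance) distribution when fitting a local surrogate. The sketch below is a minimal illustration of that pipeline under stated assumptions, not the authors' implementation: the `encode`/`decode` functions are simple stand-ins for a pre-trained VAE, the black-box model is a placeholder classifier, and the temperature, latent noise scale, and ridge surrogate are illustrative choices.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

rng = np.random.default_rng(0)

# Placeholder data and black-box model to be explained.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Hypothetical pre-trained VAE: a fixed linear map plays the role of the
# learned encoder/decoder purely so the example runs end to end.
W = rng.normal(size=(8, 4))

def encode(x):
    """Stand-in for the VAE encoder: data -> latent representation."""
    return x @ W

def decode(Z):
    """Stand-in for the VAE decoder: latent samples -> data space."""
    return Z @ np.linalg.pinv(W)

def explain_instance(x, n_samples=200, latent_scale=0.5, temperature=1.0):
    """LIME-style local surrogate fit on VAE-generated neighbours of x."""
    z = encode(x)
    # Perturb in latent space so decoded samples stay near the data manifold.
    Z = z + latent_scale * rng.normal(size=(n_samples, z.shape[0]))
    X_local = decode(Z)
    # Boltzmann weights: samples closer to x receive higher weight.
    d = np.linalg.norm(X_local - x, axis=1)
    w = np.exp(-d / temperature)
    w /= w.sum()
    # Weighted linear surrogate; its coefficients serve as local importances.
    y_local = black_box.predict_proba(X_local)[:, 1]
    surrogate = Ridge(alpha=1.0).fit(X_local, y_local, sample_weight=w)
    return surrogate.coef_

print(explain_instance(X[0]))
```

Because perturbations are applied in the latent space and then decoded, the generated neighbours tend to resemble realistic training examples rather than arbitrary points outside the data domain, which is the failure mode of plain feature perturbation that the abstract describes.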
Publisher
University of Florida George A. Smathers Libraries
Cited by
3 articles.