Explainable AI Evaluation: A Top-Down Approach for Selecting Optimal Explanations for Black Box Models
Published: 2023-12-20
Volume: 15
Issue: 1
Page: 4
ISSN: 2078-2489
Container-title: Information
Short-container-title: Information
Language: en
Author:
Mirzaei SeyedehRoksana 1, Mao Hua 1, Al-Nima Raid Rafi Omar 2, Woo Wai Lok 1
Affiliation:
1. Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne NE1 8ST, UK
2. Technical Engineering College of Mosul, Northern Technical University, Mosul 41001, Iraq
Abstract
Explainable Artificial Intelligence (XAI) evaluation has grown significantly due to its extensive adoption and the catastrophic consequences of misinterpreting sensitive data, especially in the medical field. However, the multidisciplinary nature of XAI research means that scholars from diverse fields face significant challenges in designing proper evaluation methods. This paper proposes a novel three-layered, top-down framework for arriving at an optimal explainer, underscoring the persistent need for consensus in XAI evaluation. The paper also presents a critical comparative evaluation of explanations from both model-agnostic and model-specific explainers, including LIME, SHAP, Anchors, and TabNet, aiming to enhance the adaptability of XAI in the tabular domain. The results demonstrate that TabNet achieved the highest classification recall, followed by TabPFN and XGBoost. Additionally, this paper develops an optimal selection approach by introducing a novel measure of relative performance loss, with emphasis on the faithfulness and fidelity of global explanations, which quantifies the extent to which a model's capabilities diminish when its topmost features are eliminated. This addresses a conspicuous gap: the lack of consensus among researchers regarding how global feature importance impacts classification loss, which undermines the trust and correctness of such applications. Finally, a practical use case on medical tabular data is provided to concretely illustrate the findings.
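To make the ablation idea behind such a relative-performance-loss measure concrete, the following is a minimal sketch, not the authors' released code: the function name relative_performance_loss, the use of scikit-learn, and mean imputation as the feature-elimination step are illustrative assumptions, and the global importance vector could come from, e.g., mean absolute SHAP values.

```python
# Sketch of a relative-performance-loss style faithfulness check:
# rank features by a global importance vector, neutralize the top-k
# features, retrain, and report the relative drop in recall.
import numpy as np
from sklearn.base import clone
from sklearn.metrics import recall_score

def relative_performance_loss(model, X_train, y_train, X_test, y_test,
                              importances, k):
    """Relative loss in recall after neutralizing the k most important features.

    `importances` is any global feature-importance vector; mean imputation
    as the 'elimination' step is an assumption made for illustration.
    """
    base = clone(model).fit(X_train, y_train)
    baseline = recall_score(y_test, base.predict(X_test))

    top_k = np.argsort(importances)[::-1][:k]     # indices of the top-k features
    X_tr, X_te = X_train.copy(), X_test.copy()
    col_means = X_tr.mean(axis=0)
    X_tr[:, top_k] = col_means[top_k]             # replace features with their means
    X_te[:, top_k] = col_means[top_k]

    ablated = clone(model).fit(X_tr, y_train)
    score = recall_score(y_test, ablated.predict(X_te))
    return (baseline - score) / baseline          # relative loss, 0 = no degradation
```

Sweeping k and plotting the resulting relative loss yields a faithfulness curve: under this reading of the paper's measure, a faithful global explanation should show performance dropping sharply as the genuinely influential features are removed first.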
Subject
Information Systems
Cited by: 4 articles.