Author:
Cesarini Mirko, Malandri Lorenzo, Pallucchini Filippo, Seveso Andrea, Xing Frank
Abstract
This paper addresses the notable gap in evaluating eXplainable Artificial Intelligence (XAI) methods for text classification. While existing frameworks focus on assessing XAI in areas such as recommender systems and visual analytics, a comprehensive evaluation for text classification is missing. Our study surveys and categorises recent post hoc XAI methods according to their scope of explanation and output format. We then conduct a systematic evaluation, assessing the effectiveness of these methods across varying scopes and levels of output granularity using a combination of objective metrics and user studies. Key findings reveal that feature-based explanations exhibit higher fidelity than rule-based ones. While global explanations are perceived as more satisfying and trustworthy, they are less practical than local explanations. These insights enhance understanding of XAI in text classification and offer valuable guidance for developing effective XAI systems, enabling users to evaluate each explainer’s pros and cons and select the most suitable one for their needs.
Funder
Università degli Studi di Milano - Bicocca
Publisher
Springer Science and Business Media LLC
Cited by
1 article.