Affiliation:
1. Univ. Artois, CNRS, CRIL
2. Institut Universitaire de France
Abstract
Abductive explanations play a central role in eXplainable Artificial Intelligence (XAI): they clarify, using few features,
how data instances are classified. However, an instance may have exponentially many minimum-size abductive explanations, and
this source of complexity holds even for "intelligible" classifiers such as decision trees. When the number of such abductive explanations is huge,
computing only one of them is often not informative enough; in particular, better explanations than the derived
one may exist. To circumvent this issue, we propose to leverage
a model of the explainee that makes his/her preferences about explanations precise, and to compute only
the preferred explanations. In this paper, several such models are presented and discussed. For each model, we present and
evaluate an algorithm for computing preferred majoritary reasons, where majoritary reasons are abductive
explanations specifically suited to random forests. We show that, in practice, the preferred majoritary reasons for an instance
can be far less numerous than its majoritary reasons.
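To make the setting concrete, here is a minimal illustrative sketch, not the authors' algorithm, of how one preferred majoritary reason could be computed greedily. It assumes binary features, a two-class forest, and a hypothetical explainee preference model reduced to an ordering over features (features the explainee most wants dropped come first); the names Node, is_implicant, majority_holds, and greedy_majoritary_reason are all invented for this sketch. A majoritary reason is taken here to be a subset-minimal part of the instance under which a strict majority of the trees still classify every completion as the predicted class.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    feat: int = -1                    # feature tested at an internal node
    left: Optional["Node"] = None     # subtree for feature value 0
    right: Optional["Node"] = None    # subtree for feature value 1
    label: int = -1                   # class label at a leaf

def classify(x, tree):
    # Route a complete instance x ({feature: 0/1}) to a leaf.
    while tree.feat != -1:
        tree = tree.right if x[tree.feat] == 1 else tree.left
    return tree.label

def majority_class(x, forest):
    votes = [classify(x, t) for t in forest]
    return max(set(votes), key=votes.count)

def is_implicant(term, tree, target):
    # term is a partial assignment {feature: 0/1}. True iff every
    # completion of term reaches a leaf of `tree` labelled `target`.
    if tree.feat == -1:
        return tree.label == target
    v = term.get(tree.feat)
    if v != 1 and not is_implicant(term, tree.left, target):
        return False
    if v != 0 and not is_implicant(term, tree.right, target):
        return False
    return True

def majority_holds(term, forest, target):
    # A strict majority of the trees must map every completion of term
    # to `target`; each per-tree test is polynomial in the tree size.
    votes = sum(is_implicant(term, t, target) for t in forest)
    return 2 * votes > len(forest)

def greedy_majoritary_reason(x, forest, preference_order):
    # preference_order must enumerate all features of x; literals are
    # dropped in that order as long as the majority invariant holds.
    target = majority_class(x, forest)   # x itself satisfies the invariant
    term = dict(x)
    for f in preference_order:
        saved = term.pop(f)
        if not majority_holds(term, forest, target):
            term[f] = saved              # this literal is needed; keep it
    return term                          # subset-minimal for the tried order

The fixed ordering merely stands in for whichever preference model is chosen; the point the abstract relies on is that testing whether a partial assignment forces a single decision tree's output is cheap, so the majority condition can be rechecked tree by tree after each tentative deletion.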
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
5 articles.