Affiliation:
1. CRIL, Université d'Artois & CNRS
2. Institut Universitaire de France
Abstract
In this paper, we investigate the computational intelligibility of Boolean classifiers,
characterized by their ability to answer XAI queries in polynomial time.
The classifiers under consideration are decision trees, DNF formulae, decision lists, decision rules, tree ensembles, and
Boolean neural nets. Using 9 XAI queries, including both explanation queries and verification queries,
we show the existence of a large intelligibility gap between these families of classifiers. On the one hand, all 9 XAI queries
are tractable for decision trees. On the other hand, none of them is tractable for DNF formulae, decision lists, random forests, boosted decision trees,
Boolean multilayer perceptrons, or binarized neural networks.
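As a rough illustration of the kind of tractability at stake, consider the following minimal sketch in Python. It is not one of the paper's algorithms, and the Node and path_explanation names are hypothetical: for a decision tree, the literals tested along the path followed by an instance already form an explanation of the prediction (sometimes called the direct reason in the literature), and they can be collected in time linear in the depth of the tree.

```python
# Minimal sketch (hypothetical, not the paper's algorithm): the literals
# tested along the path followed by an instance imply the predicted class,
# so they form an explanation, computable in time linear in the tree depth.
from dataclasses import dataclass
from typing import Optional, Dict, List


@dataclass
class Node:
    """A decision-tree node over Boolean features."""
    feature: Optional[str] = None   # None for leaves
    low: Optional["Node"] = None    # child followed when the feature is False
    high: Optional["Node"] = None   # child followed when the feature is True
    label: Optional[bool] = None    # class label, set only at leaves


def path_explanation(root: Node, instance: Dict[str, bool]) -> List[str]:
    """Collect the literals tested on the path followed by `instance`.

    Any instance satisfying these literals reaches the same leaf, so their
    conjunction implies the prediction (it may not be subset-minimal).
    """
    literals, node = [], root
    while node.label is None:          # descend until a leaf is reached
        value = instance[node.feature]
        literals.append(node.feature if value else f"not {node.feature}")
        node = node.high if value else node.low
    return literals


# Toy tree classifying an instance as True iff x1 and x2 both hold.
tree = Node(feature="x1",
            low=Node(label=False),
            high=Node(feature="x2",
                      low=Node(label=False),
                      high=Node(label=True)))

print(path_explanation(tree, {"x1": True, "x2": True}))   # ['x1', 'x2']
print(path_explanation(tree, {"x1": False, "x2": True}))  # ['not x1']
```

For the other families listed above, even deciding such explanation and verification queries is intractable, which is the gap the paper quantifies.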
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
9 articles.
1. Assessing Decision Tree Stability: A Comprehensive Method for Generating a Stable Decision Tree. IEEE Access, 2024.
2. Logic for Explainable AI. 2023 38th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), 2023-06-26.
3. Disproving XAI Myths with Formal Methods – Initial Results. 2023 27th International Conference on Engineering of Complex Computer Systems (ICECCS), 2023-06-14.
4. Tractability of explaining classifier decisions. Artificial Intelligence, 2023-03.
5. Feature Necessity & Relevancy in ML Classifier Explanations. Tools and Algorithms for the Construction and Analysis of Systems, 2023.