Affiliation:
1. CRIL, Univ Artois & CNRS
2. Institut Universitaire de France
Abstract
One of the key purposes of eXplainable AI (XAI) is to develop techniques for understanding predictions made by Machine Learning (ML) models and for assessing how reliable they are. Several encoding schemes have recently been proposed, showing how ML classifiers of various types can be mapped to Boolean circuits exhibiting the same input-output behaviour. Thanks to such mappings, XAI queries about classifiers can be delegated to the corresponding circuits. In this paper, we define new explanation and/or verification queries about classifiers. We show how they can be addressed by combining queries and transformations on the associated Boolean circuits. Taking advantage of previous results from the knowledge compilation map, this allows us to identify a number of XAI queries that are tractable, provided that the circuit has first been turned into a compiled representation.
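To illustrate the idea of delegating an XAI query to a Boolean circuit, here is a minimal sketch (not taken from the paper): a toy classifier is represented by a Boolean function standing in for the compiled circuit, and a sufficient-reason (abductive explanation) query is answered by checking every completion of a partial assignment. The function names and the three-feature classifier are illustrative assumptions only; a compiled representation would answer such queries without brute-force enumeration.

```python
from itertools import product

# Toy classifier: predicts 1 iff (x1 AND x2) OR x3. This Boolean function
# stands in for the circuit obtained by encoding the classifier.
def circuit(x1, x2, x3):
    return int((x1 and x2) or x3)

# Explanation query: is the partial assignment `term` (a dict mapping
# feature index -> value) a sufficient reason for predicting class 1?
# That is, does every completion of the unfixed features yield 1?
def is_sufficient_reason(term, n=3):
    free = [i for i in range(n) if i not in term]
    for vals in product([0, 1], repeat=len(free)):
        point = dict(term)
        point.update(zip(free, vals))
        if circuit(point[0], point[1], point[2]) != 1:
            return False
    return True

print(is_sufficient_reason({2: 1}))  # x3=1 alone forces class 1 -> True
print(is_sufficient_reason({0: 1}))  # x1=1 alone does not -> False
```

The enumeration above is exponential in the number of free features; the point of knowledge compilation is that, once the circuit is in a suitable compiled form (e.g., a decision diagram), such queries become tractable.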
Publisher
International Joint Conferences on Artificial Intelligence Organization
Cited by
18 articles.
1. Knowledge compilation. Annals of Mathematics and Artificial Intelligence, 2024-05-17.
2. On Bounding the Behavior of Neurons. International Journal on Artificial Intelligence Tools, 2024-04-25.
3. Relative Keys: Putting Feature Explanation into Context. Proceedings of the ACM on Management of Data, 2024-03-12.
4. Logic for Explainable AI. 2023 38th Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), 2023-06-26.
5. Disjunctive Threshold Networks for Tabular Data Classification. IEEE Open Journal of the Computer Society, 2023.