Authors:
Kyrylo Medianovskyi, Ahti-Veikko Pietarinen
Abstract
Modern explainable AI (XAI) methods remain far from providing human-like answers to ‘why’ questions, let alone answers that agree satisfactorily with human-level understanding. Instead, the results such methods provide boil down to sets of causal attributions. Currently, the choice of accepted attributions rests largely, if not solely, on the explainee’s judgment of the quality of the explanations. The paper argues that such decisions may be transferred from a human to an XAI agent, provided that its machine-learning (ML) algorithms perform genuinely abductive inferences. The paper outlines the key predicament of the current inductive paradigm of ML and its associated XAI techniques, and sketches the desiderata for a truly participatory, second-generation XAI endowed with abduction.
Subject
History and Philosophy of Science, Philosophy
Cited by: 8 articles.