Abstract
Under the rubric of understanding the problem of AI explainability in terms of abductive cognition, I propose to review the lessons from AlphaGo and its more powerful successors. As AI players of Baduk (Go, Weiqi) have reached superhuman level, there seems to be no hope of understanding the secret of their breathtakingly brilliant moves. Without making AI players explainable in some way, both human and AI players would remain less-than-omniscient, if not ignorant, epistemic agents. Are we bound to have less explainable AI Baduk players as they make further progress? I shall show that the resolution of this apparent paradox depends on how we understand the crucial distinction between abduction and inference to the best explanation (IBE). Some further philosophical issues arising from explainable AI will also be discussed in connection with this distinction.
Subject
History and Philosophy of Science, Philosophy
Cited by 2 articles.