Abstract
Since 2016 a significant program of work has been developed under the title of explainable artificial intelligence (XAI). This program, prompted and extensively funded by DARPA, has sought to address the reasoning behind the decisions or recommendations of AI systems. As AI systems often have concealed or "black box" characteristics, the problem of explainability is a significant challenge. XAI has been described as a movement rather than a single technology approach. Many thousands of papers have examined the problem, and diverse approaches have been put forward. One approach encouraged by DARPA, which applies inductive reasoning, has since become known as post-hoc reasoning. This research examines the claim to accuracy of post-hoc explanations from the perspective of the philosophy of technology. As AI systems are already being used to determine who should have access to scarce resources and who should be punished and in what way, the accuracy of an explanation is an important ethical issue. This paper asserts that technologists, as experts in AI technology, hold a unique ethical responsibility to clarify to a wider audience the limitations of knowledge about the workings of black box AI systems, and to avoid narratives that encourage uncritical acceptance of technology promise. The paper also proposes practical ways to approach the use of post-hoc reasoning where it is appropriate.
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Cited by
2 articles.