Abstract
This paper summarizes the psychological insights and related design challenges that have emerged in the field of Explainable AI (XAI). The summary is organized as a set of principles, some of which have recently been instantiated in XAI research. The principles refer primarily to the design and evaluation stages of XAI system development, that is, to the design of explanations and the design of experiments for evaluating the performance of XAI systems. The principles can serve as guidance to ensure that AI systems are human-centered and effectively assist people in solving difficult problems.
Funder
711th Human Performance Wing
Australian Research Council
Publisher
Springer Science and Business Media LLC
Cited by
3 articles.