Affiliation:
1. University of Malaya, Kuala Lumpur, Malaysia
2. University of Hamburg, Hamburg, Germany
Abstract
Recent applications of autonomous agents and robots have drawn attention to crucial trust-related challenges associated with the current generation of artificial intelligence (AI) systems. Despite their great successes, AI systems based on the connectionist deep learning neural network approach lack the capability to explain their decisions and actions to others. Without symbolic interpretation capabilities, they are 'black boxes': their choices and actions are opaque, making them difficult to trust in safety-critical applications. Recent interest in the explainability of AI systems has produced several approaches to eXplainable Artificial Intelligence (XAI); however, most studies have focused on data-driven XAI systems applied in the computational sciences. Studies addressing the increasingly pervasive goal-driven agents and robots are still sparse. This paper reviews approaches to explainable goal-driven intelligent agents and robots, focusing on techniques for explaining and communicating agents' perceptual functions (e.g., senses, vision) and cognitive reasoning (e.g., beliefs, desires, intentions, plans, and goals) with humans in the loop. The review highlights key strategies that emphasize transparency, understandability, and continual learning for explainability. Finally, the paper presents requirements for explainability and suggests a road map for the possible realization of effective goal-driven explainable agents and robots.
Funder
Georg Forster Research Fellowship for Experienced Researchers
Alexander von Humboldt-Stiftung/Foundation and Impact Oriented Interdisciplinary Research
University of Malaya
German Research Foundation
Publisher
Association for Computing Machinery (ACM)
Subject
General Computer Science, Theoretical Computer Science
References: 231 articles.
Cited by: 16 articles.