Affiliation:
1. University of Piraeus, Piraeus, Greece
Abstract
Interpretability, explainability, and transparency are key issues in introducing artificial intelligence methods into many critical domains. Their importance stems from ethical concerns and trust issues strongly connected to reliability, robustness, auditability, and fairness, and it has significant consequences for keeping the human in the loop at high levels of automation, especially in critical decision-making cases where both the human and the machine play important roles. Although the research community has devoted much attention to the explainability of closed-box (black-box) prediction models, there is a pressing need for explainability of closed-box methods that support agents in acting autonomously in the real world. Reinforcement learning methods, and especially their deep versions, are such closed-box methods. In this article, we provide a review of state-of-the-art methods for explainable deep reinforcement learning, taking into account the needs of human operators, that is, of those who make the actual and critical decisions in solving real-world problems. We provide a formal specification of the deep reinforcement learning explainability problems, and we identify the necessary components of a general explainable reinforcement learning framework. Based on these, we provide a comprehensive review of state-of-the-art methods, categorizing them into classes according to the paradigm they follow, the interpretable models they use, and the surface representation of the explanations they provide. The article concludes by identifying open questions and important challenges.
Funder
TAPAS
Towards an Automated and exPlainable Air traffic management (ATM) System
Publisher
Association for Computing Machinery (ACM)
Subject
General Computer Science, Theoretical Computer Science
Cited by
44 articles.