Affiliation:
1. Trinity College Dublin, Dublin, Ireland
Abstract
While AI algorithms have shown remarkable success in various fields, their lack of transparency hinders their application to real-life tasks. Although explanations targeted at non-experts are necessary for user trust and human-AI collaboration, the majority of explanation methods for AI focus on developers and expert users. Counterfactual explanations are local explanations that advise users on what can be changed in the input so that the output of the black-box model changes. Counterfactuals are user-friendly and provide actionable advice for achieving the desired output from the AI system. While extensively researched in supervised learning, few methods apply them to reinforcement learning (RL). In this work, we explore the reasons why this powerful explanation method is underrepresented in RL. We start by reviewing the current work on counterfactual explanations in supervised learning. We then examine the differences between counterfactual explanations in supervised learning and RL, and identify the main challenges that prevent the adoption of methods from supervised learning in RL. Finally, we redefine counterfactuals for RL and propose research directions for implementing them in RL.
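To make the abstract's definition concrete, the following is a minimal toy sketch (not from the paper): a counterfactual explanation reports the smallest change to an input feature that flips a black-box model's decision. The loan-style classifier, feature names, and step-search strategy here are all illustrative assumptions.

```python
# Toy stand-in for a black-box model: approve if income minus debt
# clears a fixed threshold. (Hypothetical example, not the paper's method.)
def classify(applicant):
    return "approved" if applicant["income"] - applicant["debt"] >= 50 else "denied"

def counterfactual(applicant, feature, step, limit):
    """Brute-force search: nudge one feature until the output flips.

    Returns the modified input (the counterfactual), or None if no
    flip occurs within `limit` of the original feature value.
    """
    candidate = dict(applicant)
    while classify(candidate) != "approved":
        candidate[feature] += step
        if abs(candidate[feature] - applicant[feature]) > limit:
            return None  # no counterfactual within the allowed range
    return candidate

x = {"income": 60, "debt": 30}
print(classify(x))                        # denied (60 - 30 < 50)
print(counterfactual(x, "income", 5, 100))  # {'income': 80, 'debt': 30}
```

The returned counterfactual doubles as actionable advice: "if your income were 80 instead of 60, the loan would be approved." The challenge the paper addresses is that in RL the "input" is a state within a sequential decision process, so a single-feature flip like this no longer captures what a user can act on.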
Funder
Science Foundation Ireland
SFI Frontiers for the Future
Publisher
Association for Computing Machinery (ACM)
Cited by 1 article.