Abstract
This thesis explores how reinforcement learning (RL) agents can provide explanations for their actions and behaviours. As humans, we build causal models to encode cause-effect relations of events and use these to explain why events happen. Taking inspiration from cognitive psychology and social science literature, I build causal explanation models and explanation dialogue models for RL agents. By mimicking human-like explanation models, these agents can provide explanations that are natural and intuitive to humans.
Publisher
Association for the Advancement of Artificial Intelligence (AAAI)
Cited by
1 article.
1. Towards an Automatic Ensemble Methodology for Explainable Reinforcement Learning. 2024 IEEE 14th Annual Computing and Communication Workshop and Conference (CCWC), 2024-01-08.