Affiliation:
1. Department of Computer Engineering, Middle East Technical University, Ankara, Turkey
2. RF and Simulation Systems Directorate, STM Defense Technologies Engineering and Trade Inc., Ankara, Turkey
Abstract
Various methods have been proposed in the literature for identifying subgoals in discrete reinforcement learning (RL) tasks. Once subgoals are discovered, task decomposition methods can be employed to improve the learning performance of agents. In this study, we classify prominent subgoal identification methods for discrete RL tasks in the literature into the following three categories: graph-based, statistics-based, and multi-instance learning (MIL)-based. As contributions, firstly, we introduce a new MIL-based subgoal identification algorithm called EMDD-RL and experimentally compare it with a previous MIL-based method. The previous approach adapts MIL's Diverse Density (DD) algorithm, whereas our method builds on Expectation-Maximization Diverse Density (EMDD). The advantage of EMDD over DD is that it can yield more accurate results with lower computational demand thanks to the expectation-maximization algorithm. EMDD-RL modifies some of the algorithmic steps of EMDD to identify subgoals in discrete RL problems. Secondly, we evaluate the methods in several RL tasks for the hyperparameter tuning overhead they incur. Thirdly, we propose a new RL problem called key-room and compare the methods for their subgoal identification performance in this new task. Experiment results show that MIL-based subgoal identification methods can be preferred over the algorithms of the other two categories in practice.
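To make the MIL framing concrete, the following is a minimal sketch (not the paper's EMDD-RL implementation) of an EM-style diverse-density search: trajectories that reach the goal are treated as positive bags of states, failed trajectories as negative bags, and the hypothesis is restricted to observed positive-bag states, which suits discrete RL tasks. The function name `emdd_subgoal`, the Gaussian instance-match probability, and the bag encoding are illustrative assumptions.

```python
import numpy as np

def _pr(t, x):
    # Gaussian-style "instance matches target" probability used in DD/EM-DD.
    return np.exp(-((x - t) ** 2).sum(axis=-1))

def emdd_subgoal(pos_bags, neg_bags, n_iter=10):
    """EM-DD-style search for a subgoal candidate (illustrative sketch).

    pos_bags: trajectories that reached the goal, each an (n_i, d) array
              of state features (positive bags).
    neg_bags: trajectories that failed (negative bags).
    Hypotheses are restricted to states observed in positive bags.
    """
    candidates = np.vstack(pos_bags)
    neg_all = (np.vstack(neg_bags) if neg_bags
               else np.empty((0, candidates.shape[1])))
    h = candidates[0]
    for _ in range(n_iter):
        # E-step: keep only the most target-like instance of each positive bag.
        reps = np.array([bag[np.argmax(_pr(h, bag))] for bag in pos_bags])

        # M-step: pick the candidate state maximizing the diverse-density
        # log-likelihood over the representatives and all negative instances.
        def score(t):
            return (np.log(_pr(t, reps) + 1e-12).sum()
                    + np.log(1.0 - _pr(t, neg_all) + 1e-12).sum())

        h = candidates[np.argmax([score(c) for c in candidates])]
    return h
```

A state that appears in every successful trajectory but rarely in failed ones (e.g. a doorway between rooms) scores highly under this objective, which is the intuition behind treating subgoal discovery as a MIL problem.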
Publisher
Association for Computing Machinery (ACM)
Subject
Software, Computer Science (miscellaneous), Control and Systems Engineering
References (28 articles)
1. Using chains of bottleneck transitions to decompose and solve reinforcement learning tasks with hidden states
2. Akhil Bagaria and George Konidaris. 2019. Option discovery using deep skill chaining. In International Conference on Learning Representations.
3. Using relative novelty to identify useful temporal abstractions in reinforcement learning
4. Identifying useful subgoals in reinforcement learning by local graph partitioning
5. Michael Dann, Fabio Zambetta, and John Thangarajah. 2019. Deriving subgoals autonomously to accelerate learning in sparse reward domains. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33. 881–889.