Affiliation:
1. National Tsing Hua University, Hsinchu, Taiwan
2. Harvard University, Cambridge, Massachusetts
Abstract
Conventional reinforcement learning (RL) typically selects a primitive action at each timestep. However, by using a suitable macro action, defined as a sequence of primitive actions, an RL agent can skip over intermediate states to a more distant state and thereby facilitate its learning. The question we investigate is what beneficial properties such macro actions possess. In this article, we unveil two properties of macro actions: reusability and transferability. The first property, reusability, means that a macro action derived with one RL method can be reused to train an agent with another RL method. The second property, transferability, means that a macro action can be used to train agents in similar environments with different reward settings. In our experiments, we first derive macro actions alongside RL methods, and then provide a set of analyses that reveal the reusability and transferability of the derived macro actions.
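To make the notion of a macro action concrete, the following is a minimal sketch of how a fixed sequence of primitive actions can be exposed to an agent as a single action, assuming a Gymnasium-style discrete-action environment. The MacroActionWrapper name and the example macros are illustrative assumptions, not the authors' implementation.

```python
# Sketch: execute a macro action (a sequence of primitive actions) as one step.
# Assumes a Gymnasium-style environment; names and macros here are hypothetical.
import gymnasium as gym


class MacroActionWrapper(gym.Wrapper):
    """Expose each macro action as a single action that unrolls its primitives."""

    def __init__(self, env, macros):
        super().__init__(env)
        self.macros = macros  # list of primitive-action sequences
        # The agent now chooses among macros instead of primitive actions.
        self.action_space = gym.spaces.Discrete(len(macros))

    def step(self, macro_index):
        total_reward = 0.0
        terminated = truncated = False
        obs, info = None, {}
        # Execute the primitive actions in order, accumulating reward and
        # stopping early if the episode ends in the middle of the macro.
        for primitive in self.macros[macro_index]:
            obs, reward, terminated, truncated, info = self.env.step(primitive)
            total_reward += reward
            if terminated or truncated:
                break
        return obs, total_reward, terminated, truncated, info


# Usage: wrap an environment so the agent picks from two hypothetical macros.
if __name__ == "__main__":
    env = MacroActionWrapper(
        gym.make("CartPole-v1"),
        macros=[[0, 0, 1], [1, 1, 0]],  # example primitive-action sequences
    )
    obs, info = env.reset(seed=0)
    obs, reward, terminated, truncated, info = env.step(0)
```

Because the wrapper leaves the underlying environment unchanged, the same macro definitions can in principle be plugged into different RL methods or into variants of the environment with different reward settings, which is the setting the article's analyses consider.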
Funder
Ministry of Science and Technology
Publisher
Association for Computing Machinery (ACM)