Affiliation:
1. Univ Lyon, UCBL, CNRS, INSA Lyon, LIRIS, UMR5205, 69622 Villeurbanne, France
Abstract
The reinforcement learning (RL) research area is very active, with many new contributions, especially in the emerging field of deep RL (DRL). However, several scientific and technical challenges remain open, among them the ability to abstract actions and the difficulty of exploring the environment under sparse rewards, both of which can be addressed by intrinsic motivation (IM). We survey these research works through a new taxonomy based on information theory: we computationally revisit the notions of surprise, novelty, and skill learning. This allows us to identify the advantages and disadvantages of existing methods and to highlight current research outlooks. Our analysis suggests that novelty and surprise can help build a hierarchy of transferable skills that abstracts dynamics and makes the exploration process more robust.
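As a minimal sketch of the idea the abstract alludes to, one classic family of intrinsic-motivation methods adds a novelty bonus to the extrinsic reward, decaying as a state is revisited. The count-based form below, with an assumed bonus scale `beta`, is an illustrative instance rather than the taxonomy or any specific method from the survey:

```python
from collections import defaultdict

def novelty_bonus(counts, state, beta=0.5):
    """Count-based intrinsic reward: beta / sqrt(N(s)).

    The bonus shrinks as state `state` is visited more often,
    pushing the agent toward rarely seen states even when the
    extrinsic reward is sparse. `beta` (assumed here) scales the bonus.
    """
    counts[state] += 1
    return beta / counts[state] ** 0.5

# Example: the same state yields a smaller bonus on each revisit.
counts = defaultdict(int)
r_first = novelty_bonus(counts, (0, 0))   # first visit: beta / sqrt(1)
r_second = novelty_bonus(counts, (0, 0))  # second visit: beta / sqrt(2)
```

During training, such a bonus is simply added to the environment reward, e.g. `r_total = r_ext + novelty_bonus(counts, s)`.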
Subject
General Physics and Astronomy
References: 190 articles.
1. Sutton, R.S., and Barto, A.G. (1998). Reinforcement Learning: An Introduction, MIT Press.
2. Bellemare, M.G., Naddaf, Y., Veness, J., and Bowling, M. (2015). Proceedings of the IJCAI, AAAI Press.
3. Mnih, V., et al. (2015). Human-level control through deep reinforcement learning. Nature.
4. Henderson, P., et al. (2018). An Introduction to Deep Reinforcement Learning. Found. Trends Mach. Learn.
5. Todorov, E., Erez, T., and Tassa, Y. (2012, January 7–12). Mujoco: A physics engine for model-based control. Proceedings of the 2012 IEEE/RSJ International Conference on Intelligent Robots and Systems, Vilamoura-Algarve, Portugal.
Cited by 14 articles.