Author:
Terashima Kento, Takano Hirotaka, Murata Junichi
Abstract
Reinforcement learning is applicable to complex or unknown problems because the solution is found by trial-and-error search. However, the calculation time required for this search grows as the scale of the problem increases. To reduce the calculation time, several methods have been proposed that exploit prior information about the problem. This paper improves a previously proposed method that utilizes options as prior information. To increase the learning speed even when the given options are wrong, methods for option correction are proposed that forget the option's policy and extend its initiation set.
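In the options framework, an option is commonly described by an initiation set (states where it may be invoked), an intra-option policy, and a termination condition. The sketch below is a minimal, hypothetical illustration of how the two correction steps named in the abstract could look; the grid-world setting, class, and method names are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: an option with two correction steps,
# "forgetting" part of its policy and extending its initiation set.
# All names and the grid-world setting are illustrative assumptions.
import random

ACTIONS = ["up", "down", "left", "right"]


class Option:
    def __init__(self, initiation_set, policy, terminal_states):
        self.initiation_set = set(initiation_set)    # states where the option may start
        self.policy = dict(policy)                   # state -> action, given as prior knowledge
        self.terminal_states = set(terminal_states)  # states where the option ends

    def available(self, state):
        return state in self.initiation_set

    def act(self, state):
        # Fall back to a random exploratory action where the
        # (possibly wrong) prior policy is undefined.
        return self.policy.get(state, random.choice(ACTIONS))

    def forget_policy(self, states):
        # Correction 1: discard the prior action in states where the option
        # appears to be wrong, so the action can be relearned by trial and error.
        for s in states:
            self.policy.pop(s, None)

    def extend_initiation_set(self, states):
        # Correction 2: allow the option to be invoked from additional states,
        # e.g. neighbours of the current initiation set.
        self.initiation_set.update(states)


if __name__ == "__main__":
    # A small option assumed to lead toward the terminal state (2, 2).
    opt = Option(
        initiation_set=[(0, 0), (0, 1)],
        policy={(0, 0): "right", (0, 1): "down"},
        terminal_states=[(2, 2)],
    )
    print(opt.available((0, 1)))         # True: option can start here
    opt.forget_policy([(0, 1)])          # drop a suspected wrong prior action
    opt.extend_initiation_set([(1, 0)])  # let the option start from one more state
    print(opt.act((0, 1)) in ACTIONS)    # now chooses a random exploratory action
```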
Publisher
Fuji Technology Press Ltd.
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Human-Computer Interaction