Abstract
Machine learning techniques often carry the implicit assumption that each new task is unrelated to the tasks learned before it, so tasks are typically addressed independently. In some domains, however, particularly reinforcement learning (RL), this assumption is often incorrect: tasks drawn from the same or similar domains tend to be related. Even when tasks differ in their specifics, they may share general structure, such as common skills, that makes them related. In this paper, a novel domain adaptation-based method using adversarial networks is proposed to perform transfer learning in RL problems. The proposed method incorporates skills previously learned on a source task to speed up learning on a new target task, providing generalization not only within a task but also across different but related tasks. Experimental results indicate the effectiveness of the method on RL problems.
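The abstract does not specify architectures, losses, or training details, so the following is only a minimal, hypothetical PyTorch sketch of the general idea of adversarial domain adaptation it describes: a domain discriminator learns to tell source-task state features from target-task state features, while a shared feature extractor learns to fool it, aligning the two feature distributions so that skills learned on the source task can be reused on the target task. All names, dimensions, and hyperparameters below are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

# Hypothetical dimensions; the paper does not specify these.
STATE_DIM, FEAT_DIM = 8, 32

class FeatureExtractor(nn.Module):
    """Maps raw task states to a shared embedding space."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, FEAT_DIM))
    def forward(self, s):
        return self.net(s)

class DomainDiscriminator(nn.Module):
    """Predicts whether an embedding came from the source or target task."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(FEAT_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, 1))
    def forward(self, z):
        return self.net(z)

extractor, disc = FeatureExtractor(), DomainDiscriminator()
opt_f = torch.optim.Adam(extractor.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

def adaptation_step(source_states, target_states):
    """One adversarial update: the discriminator learns to separate the
    domains; the extractor learns to make them indistinguishable."""
    z_src, z_tgt = extractor(source_states), extractor(target_states)

    # 1) Train the discriminator: source -> 1, target -> 0.
    d_loss = (bce(disc(z_src.detach()), torch.ones(len(z_src), 1)) +
              bce(disc(z_tgt.detach()), torch.zeros(len(z_tgt), 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the extractor to make target features look like source ones.
    f_loss = bce(disc(z_tgt), torch.ones(len(z_tgt), 1))
    opt_f.zero_grad(); f_loss.backward(); opt_f.step()
    return d_loss.item(), f_loss.item()

# Toy usage: random batches standing in for replay-buffer state samples.
src = torch.randn(64, STATE_DIM)
tgt = torch.randn(64, STATE_DIM)
print(adaptation_step(src, tgt))

In a full transfer-learning pipeline, a policy trained on the aligned source features would then be fine-tuned on the target task; that stage is omitted here.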
Publisher
Cambridge University Press (CUP)
Subject
Artificial Intelligence, Software
Cited by
5 articles.