Affiliations:
1. Department of Robotics, Graduate School of Engineering, Tohoku University, Sendai 980-8579, Japan
2. Research Fellow of Japan Society for the Promotion of Science, Tokyo 102-0083, Japan
Abstract
Policy learning enables agents to learn a mapping from states to actions, supporting adaptive and flexible behavior generation in complex environments, and it is fundamental to reinforcement learning. However, as problem complexity and the demand for motion flexibility increase, traditional methods that rely on manual design have revealed their limitations. In contrast, data-driven policy learning extracts strategies from biological behavioral data and aims to reproduce these behaviors in real-world settings, improving the adaptability of agents to dynamic substrates. It has been widely applied to autonomous driving, robot control, and the interpretation of biological behavior. In this review, we survey developments in data-driven policy-learning algorithms over the past decade. We categorize them into three types according to the purpose of the method: (1) imitation learning (IL), (2) inverse reinforcement learning (IRL), and (3) causal policy learning (CPL). We describe the classification principles, methodologies, progress, and applications of each category in detail, discuss the distinct features and practical applications of these methods, and finally explore the challenges they face and prospective directions for future research.
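As a concrete illustration of the state-to-action mapping described above, the following minimal sketch shows behavioral cloning, the simplest form of imitation learning (IL): a policy is fit by supervised regression on demonstration data. The synthetic demonstrations, linear policy class, and regularization setting are illustrative assumptions for this sketch and are not taken from any of the surveyed works.

```python
# Minimal behavioral-cloning sketch (illustrative assumptions only):
# the "expert" demonstrations are synthetic, and the policy is a linear
# state-to-action map fit by ridge-regularized least squares.
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic expert demonstrations: states s and actions a = pi*(s) + noise ---
state_dim, action_dim, n_demos = 4, 2, 500
true_policy = rng.normal(size=(state_dim, action_dim))   # unknown expert mapping
states = rng.normal(size=(n_demos, state_dim))           # observed demonstration states
actions = states @ true_policy + 0.05 * rng.normal(size=(n_demos, action_dim))

# --- Behavioral cloning: supervised regression from states to expert actions ---
# W = argmin_W ||S W - A||^2 + lam * ||W||^2
lam = 1e-3
W = np.linalg.solve(states.T @ states + lam * np.eye(state_dim), states.T @ actions)

def policy(s: np.ndarray) -> np.ndarray:
    """Learned deterministic policy: maps a state (or batch of states) to actions."""
    return s @ W

# --- Evaluate imitation quality on held-out states ---
test_states = rng.normal(size=(100, state_dim))
mse = np.mean((policy(test_states) - test_states @ true_policy) ** 2)
print(f"held-out action MSE vs. expert: {mse:.4f}")
```

Behavioral cloning treats policy learning as pure supervised regression on demonstrations; the IRL and CPL families surveyed in the review instead recover a reward function or causal structure before deriving a policy.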