Grounded action transformation for sim-to-real reinforcement learning
Published: 2021-05-13
Issue: 9
Volume: 110
Pages: 2469-2499
ISSN: 0885-6125
Container-title: Machine Learning
Language: en
Short-container-title: Mach Learn
Author:
Hanna, Josiah P.; Desai, Siddharth; Karnan, Haresh; Warnell, Garrett; Stone, Peter
Abstract
Reinforcement learning in simulation is a promising alternative to the prohibitive sample cost of reinforcement learning in the physical world. Unfortunately, policies learned in simulation often perform worse than hand-coded policies when applied on the target physical system. Grounded simulation learning (GSL) is a general framework that promises to address this issue by altering the simulator to better match the real world (Farchy et al., 2013, Proceedings of the 12th International Conference on Autonomous Agents and Multiagent Systems (AAMAS)). This article introduces a new algorithm for GSL, Grounded Action Transformation (GAT), and applies it to learning control policies for a humanoid robot. We evaluate our algorithm in controlled experiments where we show it to allow policies learned in simulation to transfer to the real world. We then apply our algorithm to learning a fast bipedal walk on a humanoid robot and demonstrate a 43.27% improvement in forward walk velocity compared to a state-of-the-art hand-coded walk. This striking empirical success notwithstanding, further empirical analysis shows that GAT may struggle when the real world has stochastic state transitions. To address this limitation, we generalize GAT to the Stochastic GAT (SGAT) algorithm and empirically show that SGAT leads to successful real-world transfer in situations where GAT may fail to find a good policy. Our results contribute to a deeper understanding of grounded simulation learning and demonstrate its effectiveness for applying reinforcement learning to learn robot control policies entirely in simulation.
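The abstract describes the action-grounding idea only at a high level. As a rough illustration, the Python sketch below shows one way the grounding step could be wired up, assuming a learned forward model of real-world dynamics and a learned inverse model of the simulator; the names GroundedActionTransform, f_real, f_sim_inv, and grounded_step are hypothetical and not taken from the paper or any code release.

```python
# Minimal sketch of the Grounded Action Transformation (GAT) step.
# Assumed, hypothetical components (not from any released code):
#   f_real(s, a)     -> predicted next state under real-world dynamics
#   f_sim_inv(s, s') -> simulator action that produces transition s -> s'
class GroundedActionTransform:
    def __init__(self, f_real, f_sim_inv):
        self.f_real = f_real        # learned real-world forward model
        self.f_sim_inv = f_sim_inv  # learned simulator inverse model

    def ground(self, state, action):
        # Predict where the real robot would end up, then ask the
        # simulator's inverse model which action reproduces that outcome.
        predicted_real_next = self.f_real(state, action)
        return self.f_sim_inv(state, predicted_real_next)

def grounded_step(sim_env, grounder, state, action):
    # The policy's action is replaced by its grounded counterpart before
    # being sent to the simulator, so simulated rollouts better match
    # real-world transition dynamics.
    return sim_env.step(grounder.ground(state, action))
```

Under this reading, the policy continues to train against the unmodified simulator interface; only the actions it emits are transformed, which is what lets the grounded simulator mimic real-world transitions without changing the simulator itself.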
Funder
National Science Foundation, Office of Naval Research, Future of Life Institute, Army Research Laboratory, Defense Advanced Research Projects Agency, Lockheed Martin, General Motors, Bosch
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Software
References (54 articles)
1. Abbeel, P., Quigley, M., & Ng, A. Y. (2006). Using inaccurate models in reinforcement learning. In Proceedings of the 23rd International Conference on Machine Learning (ICML). http://dl.acm.org/citation.cfm?id=1143845
2. Ashar, J., Ashmore, J., Hall, B., Harris, S., Hengst, B., Liu, R., Mei, Z., Pagnucco, M., Roy, R., Sammut, C., Sushkov, O., Teh, B., & Tsekouras, L. (2015). RoboCup SPL 2014 champion team paper. In R. A. C. Bianchi, H. L. Akin, S. Ramamoorthy, & K. Sugiura (Eds.), RoboCup 2014: Robot World Cup XVIII, Lecture Notes in Artificial Intelligence (Vol. 8992, pp. 70–81). Springer International Publishing.
3. Boeing, A., & Bräunl, T. (2012). Leveraging multiple simulators for crossing the reality gap. In Proceedings of the 12th International Conference on Control Automation Robotics & Vision (ICARCV) (pp. 1113–1119). IEEE.
4. Bousmalis, K., Irpan, A., Wohlhart, P., Bai, Y., Kelcey, M., Kalakrishnan, M., Downs, L., Ibarz, J., Pastor, P., Konolige, K., Levine, S., & Vanhoucke, V. (2018). Using simulation and domain adaptation to improve efficiency of deep robotic grasping. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).
5. Chebotar, Y., Handa, A., Makoviychuk, V., Macklin, M., Issac, J., Ratliff, N., & Fox, D. (2019). Closing the sim-to-real loop: Adapting simulation randomization with real world experience. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA).
Cited by: 15 articles