Adaptive Gait Acquisition through Learning Dynamic Stimulus Instinct of Bipedal Robot
Published: 2024-05-22
Issue: 6
Volume: 9
Page: 310
ISSN: 2313-7673
Container-title: Biomimetics
Language: en
Short-container-title: Biomimetics
Author:
Zhang Yuanxi 1, Chen Xuechao 1,2, Meng Fei 1,2, Yu Zhangguo 1,2, Du Yidong 1, Zhou Zishun 1, Gao Junyao 1,2
Affiliation:
1. School of Mechatronical Engineering, Beijing Institute of Technology, Beijing 100081, China
2. Key Laboratory of Biomimetic Robots and Systems, Ministry of Education, Beijing 100081, China
Abstract
Standard alternating leg motions form the foundation of simple bipedal gaits, and the effectiveness of fixed stimulus signals has been demonstrated in recent studies. However, to cope with perturbations and imbalance, robots require more dynamic gaits. In this paper, we introduce dynamic stimulus signals, together with a bipedal locomotion policy, into reinforcement learning (RL). Through a learned stimulus frequency policy, the bipedal robot acquires both three-dimensional (3D) locomotion and an adaptive gait under disturbance, without relying on an explicit, model-based gait in either training or deployment. In addition, a set of specialized reward functions focused on reliable frequency reflection is used in our framework to ensure correspondence between locomotion features and the dynamic stimulus. Moreover, we demonstrate efficient sim-to-real transfer, enabling a bipedal robot called BITeno to achieve robust locomotion and disturbance resistance, even under extreme foot sliding in the real world. Specifically, after a sudden change in torso velocity of −1.2 m/s over 0.65 s, the robot recovers within 1.5–2.0 s.
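The listing does not include code, so the following is only a minimal sketch, under stated assumptions, of the idea described in the abstract: a periodic leg stimulus whose frequency is set by a learned action, appended to the policy observation, plus a reward term that rewards agreement between the stimulus phase and the measured foot contacts. The class and function names, frequency bounds, and the stance-phase convention are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): a phase oscillator driven by a
# policy-chosen frequency, and a "frequency reflection" style reward that
# checks whether foot contacts follow the stimulus phase.
import numpy as np


class StimulusClock:
    """Phase oscillator producing alternating left/right leg stimuli (hypothetical)."""

    def __init__(self, f_min=0.5, f_max=3.0, dt=0.01):
        self.f_min, self.f_max, self.dt = f_min, f_max, dt
        self.phase = 0.0  # normalized phase in [0, 1)

    def step(self, freq_action):
        # Map a normalized policy action in [-1, 1] to a stimulus frequency
        # in [f_min, f_max] Hz, then advance the phase by one control step.
        freq = self.f_min + 0.5 * (freq_action + 1.0) * (self.f_max - self.f_min)
        self.phase = (self.phase + freq * self.dt) % 1.0
        return self.observation()

    def observation(self):
        # Smooth periodic features appended to the proprioceptive observation;
        # the two legs are offset by half a period (alternating gait).
        phi_l = 2.0 * np.pi * self.phase
        phi_r = 2.0 * np.pi * ((self.phase + 0.5) % 1.0)
        return np.array([np.sin(phi_l), np.cos(phi_l), np.sin(phi_r), np.cos(phi_r)])


def frequency_reflection_reward(phase, left_contact, right_contact):
    """Reward correspondence between stimulus phase and actual foot contacts.

    Assumption (for illustration only): the first half of the cycle is the
    left stance phase and the second half is the right stance phase.
    """
    left_expected = phase < 0.5
    match = (left_contact == left_expected) + (right_contact == (not left_expected))
    return 0.5 * match  # value in [0, 1]


if __name__ == "__main__":
    clock = StimulusClock()
    stimulus_obs = clock.step(freq_action=0.2)  # features fed to the policy
    r = frequency_reflection_reward(clock.phase, left_contact=True, right_contact=False)
    print(stimulus_obs, r)
```

In a full training setup, the frequency action would come from the learned stimulus frequency policy described in the abstract, and this reward term would be one component among the other locomotion rewards; the weights and phase convention here are placeholders.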
Funder:
National Natural Science Foundation of China
“111” Project