Affiliation:
1. The Department of Mechanical Engineering, The University of Alabama, Tuscaloosa, AL 35401, USA
2. The Department of Computer Science, The University of Alabama, Tuscaloosa, AL 35401, USA
Abstract
Locomotor impairment is a highly prevalent source of disability and significantly impacts the quality of life of a large portion of the population. Despite decades of research on human locomotion, challenges remain in simulating human movement to study the features of musculoskeletal drivers and clinical conditions. Recent efforts to apply reinforcement learning (RL) techniques are promising for simulating human locomotion and revealing its musculoskeletal drivers. However, these simulations often fail to mimic natural human locomotion because most reinforcement strategies have yet to consider any reference data regarding human movement. To address these challenges, in this study we designed a reward function that combines trajectory optimization rewards (TOR) with bio-inspired rewards derived from reference motion data captured by a single Inertial Measurement Unit (IMU) sensor worn on the participants' pelvis. We also adapted the TOR terms of the reward function by leveraging previous research on walking simulations. The experimental results showed that agents trained with the modified reward function mimicked the collected IMU data more closely, meaning that the simulated human locomotion was more realistic. As a bio-inspired cost, the IMU data also enhanced the agent's ability to converge during training, so the models converged faster than those trained without reference motion data. Consequently, human locomotion can be simulated more quickly, in a broader range of environments, and with better simulation performance.
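The paper itself does not publish code; purely as an illustration, the combined reward described in the abstract (a TOR term plus a bio-inspired imitation term computed from reference IMU data) might be sketched as follows. The function names, the Gaussian kernel, and the parameters `sigma` and `w_imitate` are hypothetical choices, not the authors' implementation:

```python
import numpy as np

def imitation_reward(sim_pelvis_acc, ref_pelvis_acc, sigma=1.0):
    # Bio-inspired term: reward similarity between the simulated agent's
    # pelvis signal and the reference IMU signal via a Gaussian kernel.
    # sigma is a hypothetical scale parameter; reward is 1.0 at exact match.
    err = np.sum((np.asarray(sim_pelvis_acc) - np.asarray(ref_pelvis_acc)) ** 2)
    return float(np.exp(-err / (2 * sigma ** 2)))

def total_reward(tor_reward, sim_pelvis_acc, ref_pelvis_acc, w_imitate=0.5):
    # Weighted sum of the trajectory-optimization reward (TOR) and the
    # IMU-based imitation reward; w_imitate is a hypothetical mixing weight.
    return ((1.0 - w_imitate) * tor_reward
            + w_imitate * imitation_reward(sim_pelvis_acc, ref_pelvis_acc))
```

In this sketch, the imitation term would be evaluated at each simulation step against the time-aligned reference sample, which is one common way imitation-style rewards are combined with task rewards in RL-based locomotion work.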
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
3 articles.