Authors:
Su Binbin, Gutierrez-Farewik Elena M.
Abstract
Introduction
Recent advancements in reinforcement learning algorithms have accelerated the development of control models with high-dimensional inputs and outputs that can reproduce human movement. However, the produced motion tends to be less human-like if algorithms do not incorporate a biomechanical human model that accounts for skeletal and muscle-tendon properties and geometry. In this study, we integrated a reinforcement learning algorithm with a musculoskeletal model, including trunk, pelvis, and leg segments, to develop control models that drive the model to walk.
Methods
We first simulated human walking without imposing a target walking speed, allowing the model to settle on a stable walking speed by itself, which was 1.45 m/s. A range of other target speeds, defined relative to this self-selected walking speed, was then imposed in subsequent simulations. All simulations were generated by solving the Markov decision process problem with the covariance matrix adaptation evolution strategy, without any reference motion data.
Results
Simulated hip and knee kinematics agreed well with experimental observations, but ankle kinematics were less well predicted.
Discussion
Finally, we demonstrated that our reinforcement learning framework also has the potential to model and predict pathological gait that can result from muscle weakness.
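The core optimization loop described in the Methods — searching policy parameters by sampling, ranking, and recombining candidates, with no reference motion data — can be illustrated with a toy sketch. The code below is a simplified (mu, lambda) evolution strategy with per-dimension step-size adaptation, standing in for the full covariance matrix adaptation evolution strategy (CMA-ES) used in the study; the `reward` function here is a hypothetical stand-in, whereas the study scores rollouts of a musculoskeletal walking simulation.

```python
import math
import random

random.seed(0)


def reward(theta):
    # Hypothetical stand-in reward: negative squared distance to a fixed
    # "gait parameter" target. The actual study evaluates forward walking
    # produced by a musculoskeletal model under the candidate policy.
    target = [0.5, -0.3, 0.8, 0.1]
    return -sum((t - g) ** 2 for t, g in zip(theta, target))


def evolve(dim=4, popsize=16, elite=4, iters=200, sigma=0.5):
    """Simplified (mu, lambda) evolution strategy with diagonal
    step-size adaptation -- a sketch of the CMA-ES family, not the
    full algorithm (no covariance matrix or evolution paths)."""
    mean = [0.0] * dim
    step = [sigma] * dim
    for _ in range(iters):
        # Sample a population of candidate parameter vectors.
        pop = []
        for _ in range(popsize):
            cand = [m + s * random.gauss(0, 1) for m, s in zip(mean, step)]
            pop.append((reward(cand), cand))
        # Rank by reward and keep the elite.
        pop.sort(key=lambda p: p[0], reverse=True)
        best = [c for _, c in pop[:elite]]
        # Recombination: new mean is the elite average.
        mean = [sum(c[i] for c in best) / elite for i in range(dim)]
        # Adapt per-dimension step size from the elite sample spread.
        for i in range(dim):
            var = sum((c[i] - mean[i]) ** 2 for c in best) / elite
            step[i] = max(1e-8, 0.9 * step[i] + 0.1 * math.sqrt(var))
    return mean


theta = evolve()
print(reward(theta))
```

In the study's setting, each reward evaluation would correspond to rolling out the candidate controller in the musculoskeletal simulation and scoring the resulting gait, which is why a derivative-free, sampling-based optimizer like CMA-ES is a natural fit.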
Subject
Artificial Intelligence, Biomedical Engineering
Cited by
3 articles.