Affiliation:
1. School of Instrument Science and Engineering, Southeast University, Nanjing, China
Abstract
Developing systems that synthesize natural, lifelike motions for virtual characters has long been a central focus of computer animation. Such a system must both generate high-quality character motions and provide users with a convenient, flexible interface for guiding those motions. In this work, we propose a language-directed virtual human motion generation approach based on musculoskeletal models that achieves interactive, higher-fidelity virtual human motion and lays a foundation for language-directed controllers in physics-based character animation. First, we construct a simplified musculoskeletal dynamics model for the virtual character. We then propose a hierarchical control framework consisting of a trajectory tracking layer and a muscle control layer, and obtain through training an optimal control policy that imitates the reference motions. We also design a multi-policy aggregation controller based on large language models, which selects from an action-caption data pool the motion policy whose caption is most similar to the user's text command, enabling natural-language control of virtual character motions. Experimental results demonstrate that the proposed approach not only generates high-quality motions that closely resemble the reference motions but also lets users effectively guide virtual characters to perform various motions via natural language instructions.
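The caption-matching step of the multi-policy aggregation controller can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the pool shape (`caption -> policy id`), the function names, and the policy identifiers are all hypothetical, and plain token overlap stands in for the LLM-based similarity measure the paper describes.

```python
def similarity(a: str, b: str) -> float:
    """Jaccard overlap between the word sets of two phrases.
    (A stand-in for the LLM-based similarity used in the paper.)"""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def select_policy(command: str, caption_pool: dict) -> str:
    """Return the policy whose caption best matches the user command."""
    best_caption = max(caption_pool, key=lambda cap: similarity(cap, command))
    return caption_pool[best_caption]

# Hypothetical action-caption pool; captions and policy names are illustrative.
pool = {
    "a person walks forward": "walk_policy",
    "a person jumps in place": "jump_policy",
    "a person waves the right hand": "wave_policy",
}

print(select_policy("walk forward slowly", pool))  # walk_policy
```

In practice a learned text embedding would replace the token-overlap score, so that semantically related commands ("stroll ahead") still map to the right policy even with no shared words.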