Affiliation:
1. The University of Edinburgh
2. Electronic Arts
3. The University of Hong Kong and The University of Edinburgh
Abstract
Interactively synthesizing novel combinations and variations of character movements from different motion skills is a key problem in computer animation. In this paper, we propose a deep learning framework to produce a large variety of martial arts movements in a controllable manner from raw motion capture data. Our method imitates animation layering using neural networks, with the aim of overcoming typical challenges when mixing, blending and editing movements from unaligned motion sources. The framework can synthesize novel movements from given reference motions and simple user controls, and generate unseen sequences of locomotion, punching, kicking, avoiding and combinations thereof. It can also reconstruct signature motions of different fighters, as well as close-character interactions such as clinching and carrying, by learning the spatial relationships between joints. To achieve this goal, we adopt a modular framework composed of a motion generator and a set of different control modules. The motion generator functions as a motion manifold that projects novel mixed/edited trajectories to natural full-body motions, and synthesizes realistic transitions between different motions. The control modules are task-dependent and can be developed and trained separately by engineers to include novel motion tasks, which greatly reduces network iteration time when working with large-scale datasets. Our modular framework provides a transparent control interface for animators that allows modifying or combining movements after network training, and enables iteratively adding control modules for different motion tasks and behaviors. Our system can be used for offline and online motion generation alike, and is relevant for real-time applications such as computer games.
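As a rough illustration of the modular design described in the abstract (a shared motion generator acting as a motion manifold, plus separately trained task-specific control modules), the following PyTorch sketch shows how such components might be composed. This is not the authors' implementation: all class names, layer sizes, feature dimensions, and the simple latent blending step are assumptions made for illustration only.

```python
import torch
import torch.nn as nn

# Hypothetical feature sizes; the paper's actual motion representations differ.
CONTROL_DIM = 32    # task-specific control signal (e.g. target trajectory or punch target)
LATENT_DIM = 128    # latent trajectory / manifold representation
POSE_DIM = 276      # full-body pose output (joint positions and rotations)


class MotionGenerator(nn.Module):
    """Shared 'motion manifold': projects (possibly mixed or edited) latent
    trajectories to natural full-body poses. Trained once on the mocap data."""
    def __init__(self):
        super().__init__()
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 512), nn.ELU(),
            nn.Linear(512, 512), nn.ELU(),
            nn.Linear(512, POSE_DIM),
        )

    def forward(self, latent):
        return self.decoder(latent)


class ControlModule(nn.Module):
    """Task-dependent controller (locomotion, punching, kicking, ...).
    Maps user controls plus the current state to a latent trajectory for the
    generator; each module can be developed and trained separately."""
    def __init__(self, control_dim=CONTROL_DIM):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(control_dim + POSE_DIM, 256), nn.ELU(),
            nn.Linear(256, LATENT_DIM),
        )

    def forward(self, control, current_pose):
        return self.encoder(torch.cat([control, current_pose], dim=-1))


# Usage sketch: layer two behaviors through the shared generator.
generator = MotionGenerator()   # trained once on the full dataset
punch_ctrl = ControlModule()    # trained separately with the generator frozen
kick_ctrl = ControlModule()     # a new task added without retraining the generator

pose = torch.zeros(1, POSE_DIM)
punch_latent = punch_ctrl(torch.randn(1, CONTROL_DIM), pose)
kick_latent = kick_ctrl(torch.randn(1, CONTROL_DIM), pose)

# Mixing happens in trajectory/latent space; the generator projects the
# blended trajectory back onto natural full-body motion.
mixed = 0.7 * punch_latent + 0.3 * kick_latent
next_pose = generator(mixed)
print(next_pose.shape)  # torch.Size([1, 276])
```

The point of the sketch is the separation of concerns the abstract emphasizes: the generator stays fixed, while control modules for new motion tasks can be added and retrained independently, which is what allows editing and combining behaviors after training.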
Funder
The University of Hong Kong
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design
Cited by
42 articles.
1. TEDi: Temporally-Entangled Diffusion for Long-Term Motion Synthesis;Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers '24;2024-07-13
2. LGTM: Local-to-Global Text-Driven Human Motion Diffusion Model;Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers '24;2024-07-13
3. Denoising Diffusion Probabilistic Models for Action-Conditioned 3D Motion Generation;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14
4. Generating Continual Human Motion in Diverse 3D Scenes;2024 International Conference on 3D Vision (3DV);2024-03-18
5. Crowd-sourced Evaluation of Combat Animations;2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR);2024-01-17