Neural animation layering for synthesizing martial arts movements

Author:

Sebastian Starke¹, Yiwei Zhao², Fabio Zinno², Taku Komura³

Affiliation:

1. The University of Edinburgh

2. Electronic Arts

3. The University of Hong Kong and The University of Edinburgh

Abstract

Interactively synthesizing novel combinations and variations of character movements from different motion skills is a key problem in computer animation. In this paper, we propose a deep learning framework to produce a large variety of martial arts movements in a controllable manner from raw motion capture data. Our method imitates animation layering using neural networks, with the aim of overcoming typical challenges when mixing, blending and editing movements from unaligned motion sources. The framework can synthesize novel movements from given reference motions and simple user controls, and generate unseen sequences of locomotion, punching, kicking, avoiding and combinations thereof. It can also reconstruct signature motions of different fighters, as well as close-character interactions such as clinching and carrying, by learning the spatial joint relationships. To achieve this goal, we adopt a modular framework composed of a motion generator and a set of different control modules. The motion generator functions as a motion manifold that projects novel mixed/edited trajectories to natural full-body motions, and synthesizes realistic transitions between different motions. The control modules are task-dependent and can be developed and trained separately by engineers to include novel motion tasks, which greatly reduces network iteration time when working with large-scale datasets. Our modular framework provides a transparent control interface for animators that allows modifying or combining movements after network training, and enables iteratively adding control modules for different motion tasks and behaviors. Our system can be used for offline and online motion generation alike, and is relevant for real-time applications such as computer games.
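The modular design described above (a shared motion generator plus separately trained, task-dependent control modules whose outputs are layered together) can be illustrated with a minimal sketch. Everything below is hypothetical: the class names, the linear "manifold" projection, and the weighted blending of control signals are placeholder stand-ins for the paper's trained networks, used only to show how such a layering interface could be organized.

```python
import numpy as np

class ControlModule:
    """Hypothetical task-specific module (e.g. locomotion, punching).
    In the paper's framework these are trained separately; here the
    'network' is a trivial placeholder mapping state + user input to
    a control signal."""
    def __init__(self, name):
        self.name = name

    def signal(self, state, user_input):
        # Placeholder for a trained network's control trajectory.
        return 0.5 * state + 0.5 * user_input

class MotionGenerator:
    """Stands in for the motion manifold: projects a mixed/edited
    control signal to a full-body pose. A random linear map replaces
    the learned generator."""
    def __init__(self, pose_dim, ctrl_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * rng.standard_normal((pose_dim, ctrl_dim))

    def generate(self, control):
        return self.W @ control

def layer_and_generate(generator, modules, weights, state, user_input):
    """Animation layering: blend the control signals from several
    modules, then let the generator project the mix to a pose."""
    mix = sum(w * m.signal(state, user_input)
              for w, m in zip(weights, modules))
    return generator.generate(mix)

# Example: mix locomotion and punching controls 70/30.
ctrl_dim, pose_dim = 8, 24
gen = MotionGenerator(pose_dim, ctrl_dim)
mods = [ControlModule("locomotion"), ControlModule("punch")]
state = np.zeros(ctrl_dim)
user = np.ones(ctrl_dim)
pose = layer_and_generate(gen, mods, [0.7, 0.3], state, user)
print(pose.shape)  # (24,)
```

Because each `ControlModule` only has to produce a control signal in a shared format, new tasks can be added and trained without retraining the generator, which is the property the abstract highlights for reducing iteration time.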

Funder

The University of Hong Kong

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Graphics and Computer-Aided Design

Cited by 42 articles.

1. TEDi: Temporally-Entangled Diffusion for Long-Term Motion Synthesis; ACM SIGGRAPH 2024 Conference Papers; 2024-07-13

2. LGTM: Local-to-Global Text-Driven Human Motion Diffusion Model; ACM SIGGRAPH 2024 Conference Papers; 2024-07-13

3. Denoising Diffusion Probabilistic Models for Action-Conditioned 3D Motion Generation;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14

4. Generating Continual Human Motion in Diverse 3D Scenes;2024 International Conference on 3D Vision (3DV);2024-03-18

5. Crowd-sourced Evaluation of Combat Animations;2024 IEEE International Conference on Artificial Intelligence and eXtended and Virtual Reality (AIxVR);2024-01-17
