Abstract
This paper presents a novel deep learning-based framework for translating motion into various styles across multiple domains. Our framework is a single set of generative adversarial networks that learns stylistic features from a collection of unpaired motion clips with style labels, supporting mapping between multiple style domains. We construct a spatio-temporal graph to model a motion sequence and employ spatial-temporal graph convolutional networks (ST-GCN) to extract stylistic properties along the spatial and temporal dimensions. Through this spatial-temporal modeling, our framework produces improved style translation results between significantly different actions and on long motion sequences containing multiple actions. In addition, we develop, for the first time, a mapping network for motion stylization that maps random noise to a style code, allowing diverse stylization results to be generated without reference motions. Through various experiments, we demonstrate that our method generates improved results in terms of visual quality, stylistic diversity, and content preservation.
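The abstract's noise-to-style mapping network can be illustrated with a minimal sketch. The following is not the authors' implementation; it is a StarGAN-style mapping network written under assumed names and sizes (latent_dim, style_dim, num_domains, hidden), showing how a random noise vector can be mapped to a per-domain style code that a motion generator could then consume.

```python
# Minimal, hypothetical sketch of a noise-to-style mapping network.
# Layer sizes and names are assumptions, not taken from the paper.
import torch
import torch.nn as nn


class MappingNetwork(nn.Module):
    def __init__(self, latent_dim=16, style_dim=64, num_domains=4, hidden=512):
        super().__init__()
        # Shared MLP trunk over the noise vector.
        self.shared = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # One output head per style domain (e.g., "zombie", "old", "happy", ...).
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, style_dim) for _ in range(num_domains)]
        )

    def forward(self, z, domain):
        # z: (batch, latent_dim) noise; domain: (batch,) integer style labels.
        h = self.shared(z)
        styles = torch.stack([head(h) for head in self.heads], dim=1)  # (B, num_domains, style_dim)
        return styles[torch.arange(z.size(0)), domain]                 # (B, style_dim)


# Sampling eight diverse style codes for one (assumed) target domain index:
net = MappingNetwork()
z = torch.randn(8, 16)
s = net(z, torch.full((8,), 2, dtype=torch.long))
```

Feeding different noise samples through the same domain head yields distinct style codes, which is what enables diverse stylization of a single content motion without a reference clip.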
Funder
National Research Foundation, Korea
KEIT, Korea
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design, Computer Science Applications
Cited by
16 articles.
1. Denoising Diffusion Probabilistic Models for Action-Conditioned 3D Motion Generation;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14
2. Spatially-Adaptive Instance Normalization for Generation of More Style-Recognizable Motions;2024 IEEE 7th Advanced Information Technology, Electronic and Automation Control Conference (IAEAC);2024-03-15
3. Contrastive disentanglement for self-supervised motion style transfer;Multimedia Tools and Applications;2024-01-30
4. MOCHA: Real-Time Motion Characterization via Context Matching;SIGGRAPH Asia 2023 Conference Papers;2023-12-10
5. An Implicit Physical Face Model Driven by Expression and Style;SIGGRAPH Asia 2023 Conference Papers;2023-12-10