Affiliation:
1. Zhejiang University and Netease Fuxi AI Lab
2. Zhejiang University
Abstract
We present a deep learning-based framework that synthesizes motion in-betweening in two stages. Given context frames and a target frame, the system generates plausible transitions of variable length in a non-autoregressive fashion. The framework consists of two Transformer Encoder-based networks: in the first stage, a Context Transformer generates rough transitions from the context; in the second stage, a Detail Transformer refines motion details. Compared to existing Transformer-based methods, which either use a complete Transformer Encoder-Decoder architecture or add 1D convolutions to generate motion transitions, our framework achieves superior performance with fewer trainable parameters by leveraging only the Transformer Encoder and a masked self-attention mechanism. To improve the generalization of our Transformer-based framework, we further introduce Keyframe Positional Encoding and Learned Relative Positional Encoding, which keep our method robust when synthesizing transitions longer than the maximum transition length seen during training. The framework is also artist-friendly: it supports full and partial pose constraints within the transition, giving artists fine control over the synthesized results. We benchmark our framework on the LAFAN1 dataset, and experiments show that our method outperforms current state-of-the-art methods by a large margin (an average of 16% for normal-length sequences and 55% for excessive-length sequences). Our method trains faster than the RNN-based method and achieves a fourfold speedup during inference. We implement our framework as a production-ready tool inside an animation authoring software package and conduct a pilot study to validate its practical value.
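To make the two-stage, non-autoregressive data flow concrete, the sketch below mirrors the pipeline described in the abstract. It is a toy illustration only: the actual stages are Transformer Encoders, while here linear interpolation and a neighbor-averaging pass stand in as placeholders, and all function names (`rough_transition`, `refine_details`, `inbetween`) are hypothetical, not from the paper.

```python
# Schematic sketch of the abstract's two-stage in-betweening pipeline.
# Placeholders: linear interpolation stands in for the Context Transformer
# (stage 1), neighbor-averaging smoothing for the Detail Transformer (stage 2).

def rough_transition(context, target, length):
    """Stage 1 placeholder: produce a rough `length`-frame transition
    between the last context pose and the target pose."""
    start = context[-1]
    frames = []
    for i in range(1, length + 1):
        t = i / (length + 1)
        frames.append([(1 - t) * s + t * e for s, e in zip(start, target)])
    return frames

def refine_details(frames):
    """Stage 2 placeholder: refine motion details via a simple
    neighbor-averaging smoothing pass over the rough transition."""
    refined = []
    for i, f in enumerate(frames):
        prev = frames[max(i - 1, 0)]
        nxt = frames[min(i + 1, len(frames) - 1)]
        refined.append([(a + 2 * b + c) / 4 for a, b, c in zip(prev, f, nxt)])
    return refined

def inbetween(context, target, length):
    """Non-autoregressive: each stage emits the whole variable-length
    transition in one pass, rather than predicting frame by frame."""
    return refine_details(rough_transition(context, target, length))

context = [[0.0, 0.0], [0.1, 0.0]]   # two context poses (toy 2-D "poses")
target = [1.0, 1.0]                  # target pose
transition = inbetween(context, target, length=3)
print(len(transition))  # 3 frames, produced in one shot
```

Because the transition length is just an argument, the same call generates transitions of any length, which is the property the paper's positional encodings are designed to preserve beyond the training range.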
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design
Cited by
15 articles.
1. Iterative Motion Editing with Natural Language;Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers '24;2024-07-13
2. Flexible Motion In-betweening with Diffusion Models;Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers '24;2024-07-13
3. Orientation-aware leg movement learning for action-driven human motion prediction;Pattern Recognition;2024-06
4. DanceCraft: A Music-Reactive Real-time Dance Improv System;Proceedings of the 9th International Conference on Movement and Computing;2024-05-30
5. Neural Motion Graph;SIGGRAPH Asia 2023 Conference Papers;2023-12-10