Affiliation:
1. Carnegie Mellon University
Abstract
We present a novel method to model and synthesize variation in motion data. Given a few examples of a particular type of motion as input, we learn a generative model that is able to synthesize a family of spatial and temporal variants that are statistically similar to the input examples. The new variants retain the features of the original examples, but are not exact copies of them. We learn a Dynamic Bayesian Network model from the input examples that enables us to capture properties of conditional independence in the data, and model it using a multivariate probability distribution. We present results for a variety of human motions and for 2D handwritten characters. We perform a user study to show that our new variants are less repetitive than the typical game and crowd-simulation approach of re-playing a small number of existing motion clips. Our technique can synthesize new variants efficiently and has a small memory requirement.
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design
Cited by
54 articles.
1. DanceCraft: A Music-Reactive Real-time Dance Improv System;Proceedings of the 9th International Conference on Movement and Computing;2024-05-30
2. Cross-Camera Human Motion Transfer by Time Series Analysis;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14
3. Contrastive disentanglement for self-supervised motion style transfer;Multimedia Tools and Applications;2024-01-30
4. MNET++: Music-Driven Pluralistic Dancing Toward Multiple Dance Genre Synthesis;IEEE Transactions on Pattern Analysis and Machine Intelligence;2023-12
5. Research on 3D Animation Simulation Based on Machine Learning;Proceedings of the 2023 3rd Guangdong-Hong Kong-Macao Greater Bay Area Artificial Intelligence and Big Data Forum;2023-09-22