Authors:
Jiang Hanzhen, Yan Yingdong
Abstract
Dance Coherent Action Generation, often referred to as motion synthesis, has become a popular research task in recent years: it aims to generate movements and actions for computer-generated characters in simulated environments. Motion synthesis algorithms are used to produce physically believable, visually compelling, and contextually appropriate movement, often driven by motion-sensor data. The Dance Coherent Action Generation Model (DCAM) is a deep-learning generative framework that produces aesthetically pleasing movements from small amounts of data. By learning an internal representation of motion dynamics, DCAM can synthesize long movement sequences in which coherent patterns are created through latent-space interpolation. The framework also provides a mechanism for varying the amplitude of the generated motion, allowing for more realistic and expressive results. The proposed model achieves 93.79% accuracy, 93.79% precision, 97.75% recall, and a 92.92% F1 score. DCAM balances imitation and creativity by producing novel outputs from limited input data, and it can be trained in an unsupervised manner or fine-tuned with sparse supervision. Furthermore, the framework is easily extended to handle multiple layers of abstraction and can be specialized to a particular type of movement, enabling the generation of highly individualized outputs.
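The abstract does not detail DCAM's architecture, but the mechanism it describes, synthesizing a coherent sequence by interpolating in a learned latent space and scaling the amplitude of the decoded motion, can be sketched as follows. This is an illustrative assumption, not the paper's implementation: the `slerp` and `interpolate_motion` helpers, the 16-dimensional latent size, and the toy linear `decode` function standing in for a trained motion decoder are all hypothetical.

```python
import numpy as np

def slerp(z0, z1, t):
    """Spherical linear interpolation between two latent codes."""
    z0_n, z1_n = z0 / np.linalg.norm(z0), z1 / np.linalg.norm(z1)
    omega = np.arccos(np.clip(np.dot(z0_n, z1_n), -1.0, 1.0))
    if omega < 1e-6:                       # nearly identical codes: fall back to linear interpolation
        return (1.0 - t) * z0 + t * z1
    return (np.sin((1.0 - t) * omega) * z0 + np.sin(t * omega) * z1) / np.sin(omega)

def interpolate_motion(decode, z_start, z_end, n_frames=120, amplitude=1.0):
    """Decode a motion sequence by walking the latent space from z_start to
    z_end and scaling each decoded pose by `amplitude` (hypothetical API)."""
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        z = slerp(z_start, z_end, t)       # coherent path through latent space
        pose = decode(z)                   # decoder maps latent code -> pose vector
        frames.append(amplitude * pose)    # amplitude control over the generated motion
    return np.stack(frames)

# Toy stand-in decoder: a fixed random linear map from a 16-D latent space
# to a 72-D pose vector (e.g. 24 joints x 3 rotation parameters).
rng = np.random.default_rng(0)
W = rng.standard_normal((72, 16))
decode = lambda z: W @ z

z_a, z_b = rng.standard_normal(16), rng.standard_normal(16)
motion = interpolate_motion(decode, z_a, z_b, n_frames=60, amplitude=0.8)
print(motion.shape)   # (60, 72): 60 frames of 72-D poses
```

In a trained model the decoder would be the learned network and the latent codes would come from encoding real dance fragments, so the interpolated path yields smooth transitions between them while the amplitude factor modulates how pronounced the resulting movement is.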
Publisher
Scalable Computing: Practice and Experience
Cited by
1 article.