Authors:
Ye Yuanzhi, He Xiangzhen, Zhang Yihao, Hu Yerong, Zeng Jia
Abstract
In the era of information technology, human-computer interaction is tending toward multi-modal forms, and lip animation driven by articulatory movements has attracted increasing attention. Starting from Mandarin Chinese, this paper establishes a model of the lip trajectory during pronunciation. Based on motion capture technology, the model proposes a database collection and processing method, extracts acoustic parameters and text information to build an HMM prediction model of articulatory parameters, and synthesizes the articulatory movement trajectory; the average percentage error relative to the real trajectory is less than 3.42%. The results show that the predicted articulatory parameters can effectively synthesize the lip movement trajectory, and that lip animation can enhance language understanding.
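As an illustration of the reported evaluation, the sketch below shows one plausible way to compute an average (mean absolute) percentage error between a synthesized lip trajectory and a motion-capture reference. The function name, array shapes, and toy data are assumptions for illustration, not the authors' actual evaluation code or exact metric definition.

```python
import numpy as np


def mean_percentage_error(predicted: np.ndarray, reference: np.ndarray) -> float:
    """Mean absolute percentage error between a synthesized trajectory
    and a motion-capture reference, averaged over frames and dimensions.

    Both arrays are assumed to have shape (n_frames, n_dims), e.g. lip
    marker coordinates over time. This is an illustrative assumption,
    not the paper's exact definition of its error metric.
    """
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    eps = 1e-8  # guard against division by zero for near-zero reference values
    errors = np.abs(predicted - reference) / (np.abs(reference) + eps)
    return float(errors.mean() * 100.0)


if __name__ == "__main__":
    # Toy example: a smooth reference trajectory and a prediction with a
    # 2% systematic deviation; the reported error is then close to 2%.
    t = np.linspace(0.0, 1.0, 100)
    reference = np.stack(
        [np.sin(2 * np.pi * t), np.cos(2 * np.pi * t) + 2.0], axis=1
    )
    predicted = reference * 1.02
    print(f"Average percentage error: {mean_percentage_error(predicted, reference):.2f}%")
```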
Subject
General Physics and Astronomy