Abstract
The increasing demand for virtual avatars has driven recent growth in the research and development of frameworks for realistic digital humans, which in turn creates a demand for realistic and adaptable facial motion capture systems. Most existing frameworks are proprietary or require substantial investment, which is why democratized solutions are relevant for the growth of digital human content creation. This work proposes a facial motion capture framework for digital humans that uses machine learning for facial codification intensity regression. The main focus is to use coded face movement intensities to generate realistic expressions on a digital human. Ablation studies on the regression models show that Neural Networks, using Histogram of Oriented Gradients as features and with person-specific normalization, perform better overall than other methods in the literature. With an RMSE of 0.052, the proposed framework offers reliable results that can be rendered on the face of a MetaHuman.
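The abstract describes the pipeline only at a high level; a minimal sketch of the kind of approach it names (HOG features, person-specific normalization via a neutral frame, and a neural-network regressor for coded movement intensities) might look as follows. All function and variable names here are hypothetical placeholders, and the layer sizes and preprocessing are assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): regressing facial codification
# intensities from HOG features with person-specific normalization.
# `frames`, `neutral_frame`, and the intensity labels are placeholders;
# face detection, alignment, and cropping are assumed to happen upstream.
import numpy as np
from skimage.feature import hog
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

def hog_features(gray_face):
    # Aligned grayscale face crop assumed (e.g., 64x64 pixels).
    return hog(gray_face, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), feature_vector=True)

def person_normalized_features(frames, neutral_frame):
    # Subtract the subject's neutral-expression HOG vector so the model
    # regresses on expression changes rather than identity appearance.
    neutral = hog_features(neutral_frame)
    return np.stack([hog_features(f) - neutral for f in frames])

# X: normalized HOG features, y: per-frame coded movement intensities.
# X_train, y_train, X_test, y_test = ...  # assumed train/test split
# model = MLPRegressor(hidden_layer_sizes=(256, 128), max_iter=500)
# model.fit(X_train, y_train)
# rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
```

The predicted intensities would then drive the corresponding facial controls of a MetaHuman rig for rendering, as the abstract indicates.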
Funder
Epic MegaGrants
Consejo Nacional de Ciencia y Tecnología
Publisher
Springer Science and Business Media LLC