A Lightweight Monocular 3D Face Reconstruction Method Based on Improved 3D Morphing Models
Authors:
You Xingyi 1,2, Wang Yue 1,2, Zhao Xiaohu 1,2
Affiliation:
1. National and Local Joint Engineering Laboratory of Internet Applied Technology on Mines, China University of Mining and Technology, Xuzhou 221008, China
2. School of Information and Control Engineering, China University of Mining and Technology, Xuzhou 221008, China
Abstract
In recent years, methods based on the 3D Morphing Model (3DMM) have achieved remarkable results in single-image 3D face reconstruction. However, high-fidelity 3D facial texture generation with these methods mostly relies on deep convolutional neural networks to fit the 3DMM parameters, which increases network depth and computational burden and reduces inference speed. Existing approaches that instead use lightweight networks for parameter fitting regain speed at the expense of reconstruction accuracy. To address these problems, we improve the 3DMM-based pipeline and propose an efficient, lightweight network model: Mobile-FaceRNet. First, we combine depthwise separable convolutions with multi-scale feature representation to fit the 3DMM parameters; then, we introduce a residual attention module during network training to strengthen the network's focus on important features, preserving high-fidelity facial texture reconstruction quality; finally, we design a new perceptual loss function that better enforces surface smoothness while maintaining image similarity. Experimental results show that the proposed method not only achieves high-precision reconstruction while remaining lightweight, but is also more robust to pose variation and occlusion.
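The abstract names three generic building blocks: depthwise separable convolution for lightweight parameter fitting, a residual attention module, and a perceptual loss with a smoothness constraint. The PyTorch sketch below illustrates those standard techniques only; it is not the authors' Mobile-FaceRNet code, and all names (DepthwiseSeparableConv, ResidualChannelAttention, perceptual_smoothness_loss) as well as the assumption of a frozen VGG-style feature extractor are illustrative.

```python
# Minimal sketch of the techniques described in the abstract (assumed names, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthwiseSeparableConv(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution,
    the standard way lightweight networks cut parameters and FLOPs."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, 1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(in_ch)
        self.bn2 = nn.BatchNorm2d(out_ch)

    def forward(self, x):
        x = F.relu(self.bn1(self.depthwise(x)))
        return F.relu(self.bn2(self.pointwise(x)))


class ResidualChannelAttention(nn.Module):
    """Channel attention (squeeze-and-excitation style) wrapped in a residual
    connection, so the block can emphasize informative feature channels."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3))).view(b, c, 1, 1)  # per-channel weights
        return x + x * w  # residual attention


def perceptual_smoothness_loss(rendered, target, feature_extractor, lam=0.1):
    """Perceptual term (feature-space MSE) plus a total-variation smoothness
    term on the rendered face; `feature_extractor` is assumed to be a frozen
    backbone such as VGG."""
    perceptual = F.mse_loss(feature_extractor(rendered), feature_extractor(target))
    tv = (rendered[:, :, 1:, :] - rendered[:, :, :-1, :]).abs().mean() + \
         (rendered[:, :, :, 1:] - rendered[:, :, :, :-1]).abs().mean()
    return perceptual + lam * tv
```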
Funder
Fundamental Research Funds for the Central Universities
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
1 article.
1. A Review of Research on 3D Face Reconstruction Methods. Proceedings of the 2024 9th International Conference on Intelligent Information Technology, 2024-02-23.