Hybrid Character Generation via Shape Control Using Explicit Facial Features
Published: 2023-05-26
Volume: 11
Issue: 11
Page: 2463
ISSN: 2227-7390
Container-title: Mathematics
Short-container-title: Mathematics
Language: en
Author:
Lee Jeongin 1, Yeom Jihyeon 1, Yang Heekyung 2, Min Kyungha 1
Affiliation:
1. Department of Computer Science, Sangmyung University, 20, Hongjimoon 2 gil, Jongro-gu, Seoul 03016, Republic of Korea
2. Division of SW Convergence, Sangmyung University, 20, Hongjimoon 2 gil, Jongro-gu, Seoul 03016, Republic of Korea
Abstract
We present a hybrid approach for generating a character by independently controlling its shape and texture using an input face and a styled face. To effectively produce the shape of a character, we propose an anthropometry-based approach that defines and extracts 37 explicit facial features. The shape of a character’s face is generated by extracting these explicit facial features from both faces and matching their corresponding features, which enables the synthesis of the shape with different poses and scales. We control this shape generation process by manipulating the features of the input and styled faces. For the style of the character, we devise a warping field-based style transfer method using the features of the character’s face. This method allows an effective application of style while maintaining the character’s shape and minimizing artifacts. Our approach yields visually pleasing results from various combinations of input and styled faces.
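The feature-matching step described in the abstract — matching corresponding facial features between two faces so the shape can be synthesized across different poses and scales — can be illustrated with a least-squares similarity (Umeyama/Procrustes) alignment. This is a minimal sketch, not the authors' implementation: it assumes the explicit facial features of both faces are already available as corresponding 2-D point arrays, and it only shows how a scale, rotation, and translation aligning one feature set to the other could be estimated.

```python
import numpy as np

def similarity_align(src, dst):
    """Estimate scale s, rotation R, translation t such that
    dst ~= s * R @ src + t, via least-squares (Umeyama) alignment.

    src, dst: (N, 2) arrays of corresponding feature points."""
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)

    # Center both point sets on their centroids.
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    Xs, Xd = src - mu_s, dst - mu_d

    # Cross-covariance between the centered sets.
    cov = Xd.T @ Xs / n

    # SVD gives the optimal rotation; D corrects for reflections.
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U @ Vt))
    D = np.diag([1.0, d])
    R = U @ D @ Vt

    # Scale from the source variance, then the translation.
    var_s = (Xs ** 2).sum() / n
    s = np.trace(np.diag(S) @ D) / var_s
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Given matched features of an input face and a styled face, the recovered (s, R, t) normalizes away pose and scale differences so that corresponding features can be blended point by point.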
Funder
Sangmyung University
Subject
General Mathematics, Engineering (miscellaneous), Computer Science (miscellaneous)