Multi-Attention Infused Integrated Facial Attribute Editing Model: Enhancing the Robustness of Facial Attribute Manipulation
Published: 2023-09-30
Volume: 12, Issue: 19, Page: 4111
ISSN: 2079-9292
Container title: Electronics
Language: en
Author:
Lin Zhijie 1, Xu Wangjun 1, Ma Xiaolong 2, Xu Caie 1, Xiao Han 3
Affiliation:
1. School of Information and Electronic Engineering, Zhejiang University of Science and Technology, Hangzhou 310023, China
2. School of Management, Huzhou University, Huzhou 313000, China
3. College of Science, University of Arizona, Tucson, AZ 85719, USA
Abstract
Facial attribute editing refers to the task of modifying facial images by altering specific target facial attributes. Existing approaches typically rely on a combination of generative adversarial networks and encoder–decoder architectures to tackle this problem, but current methods may exhibit limited accuracy for certain attributes. The primary objective of this research is to enhance facial image modification based on user-specified target attributes, such as hair color, beard removal, or gender transformation. During editing, it is crucial to modify only the regions relevant to the target attributes while preserving the details of unrelated facial attributes, so that the results appear natural and realistic. This study introduces MAGAN, a novel approach that combines a GRU structure and additive attention with adaptive gated units (AGUs). A discriminative attention mechanism is also introduced to automatically identify the regions of the input image that are relevant to facial attributes; concentrating attention on these regions improves the model's ability to capture and analyze subtle attribute features. The method incorporates external attention within the convolutional layers of the encoder–decoder architecture, enabling attention with linear complexity over image regions while implicitly modeling correlations among all data samples. By employing discriminative attention in the discriminator, the model achieves more precise attribute editing. To evaluate the effectiveness of MAGAN, experiments were conducted on the CelebA dataset: the average precision of facial attribute generation in images edited by the model is 91.83%, and the PSNR and SSIM of reconstructed images are 32.52 and 0.957, respectively. Compared with existing methods (AttGAN, STGAN, MUGAN), MAGAN achieves notable improvements in facial attribute manipulation.
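The abstract's claim of "linear complexity over image regions" matches the general external-attention formulation, in which each token attends to a small learnable external memory rather than to every other token, so cost grows linearly with the number of pixels. The sketch below is a minimal NumPy illustration of that general technique, not the authors' implementation; the memory size `S` and the double-normalization step are assumptions taken from the standard external-attention recipe.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def external_attention(F, Mk, Mv):
    """External attention over N tokens with S memory slots.
    F: (N, d) flattened image features; Mk, Mv: (S, d) learnable memories.
    Cost is O(N*S*d) -- linear in N, unlike O(N^2) self-attention."""
    attn = F @ Mk.T                                          # (N, S) slot similarities
    attn = softmax(attn, axis=0)                             # normalize over tokens
    attn = attn / (attn.sum(axis=1, keepdims=True) + 1e-9)   # l1-normalize over slots
    return attn @ Mv                                         # (N, d) refined features
```

Because `Mk` and `Mv` are shared across all inputs during training, the memories implicitly capture correlations among all data samples, which is the property the abstract appeals to.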
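The reported reconstruction quality (PSNR 32.52, SSIM 0.957) uses standard image-fidelity metrics. As a reference point, PSNR is defined as 10·log10(MAX²/MSE); the sketch below assumes 8-bit images (MAX = 255) and is a generic metric implementation, not code from the paper.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a
    reconstructed image: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher is better: a PSNR around 32 dB, as reported here, corresponds to a per-pixel RMSE of roughly 6 gray levels out of 255.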
Funder
Natural Science Foundation of Zhejiang Province
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
Cited by: 1 article.