GFAM: A Gender-Preserving Face Aging Model for Age Imbalance Data
Published: 2023-05-24
Issue: 11
Volume: 12
Page: 2369
ISSN: 2079-9292
Container-title: Electronics
Language: en
Short-container-title: Electronics
Author:
Li Suli 1,2, Lee Hyo Jong 1 (ORCID)
Affiliation:
1. Department of Computer Science and Engineering, Center for Advanced Image and Information Technology, Jeonbuk National University, Jeonju 54896, Republic of Korea
2. Department of Computer Science and Engineering, Cangzhou Normal University, Cangzhou 061000, China
Abstract
The objective of face aging is to generate facial images that present the effects of aging. The existing one-hot encoding method for aging and/or rejuvenation patterns overlooks the personalized patterns of different genders and races, causing errors such as a male beard appearing on an aged female face. To address these issues, a gender-preserving face aging model, termed GFAM, is proposed. GFAM employs a generative adversarial network and includes several subnetworks, each of which simulates the aging process between two adjacent age groups to learn age-specific aging effects. Specifically, the proposed model introduces a gender classifier and a gender loss function that use gender information as a self-guiding mechanism for maintaining gender attributes. To maintain the identity information of synthetic faces, the proposed model also introduces an identity-preserving module. Additionally, an age-balance loss is used to mitigate the impact of the imbalanced age distribution and to enhance the accuracy of aging predictions. Moreover, we construct a dataset with a balanced age distribution for the task of face age progression, referred to as Age_FR, which is expected to facilitate current research efforts. Ablation studies extensively evaluate the performance improvements achieved by our method: the full model achieves a relative improvement of 3.75% over the variant without the gender-preserving module. The experimental results provide both qualitative and quantitative evidence of the effectiveness of the proposed method. Notably, the mean face verification accuracy for the age-progressed groups (0–20, 31–40, 41–50, and 51–60) was 100%, 99.83%, 99.79%, and 99.11%, respectively, highlighting the robustness of our approach across various age ranges.
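The abstract does not give the exact loss formulation, but a minimal sketch of how such a composite generator objective might be assembled is shown below, assuming a PyTorch setting. The inverse-frequency age-balance weighting, the least-squares adversarial term, the loss weights, and all function and variable names are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of a GFAM-style composite generator loss:
# adversarial + gender-preservation + identity + age-balance terms.
import torch
import torch.nn.functional as F

def age_balance_weights(class_counts: torch.Tensor) -> torch.Tensor:
    """Inverse-frequency weights so under-represented age groups contribute
    more to the age term (one common way to counter class imbalance)."""
    inv = 1.0 / class_counts.float().clamp(min=1.0)
    return inv * (len(class_counts) / inv.sum())  # normalize to mean 1

def generator_loss(d_logits_fake,            # discriminator output on aged face
                   gender_logits, gender_labels,  # gender classifier on aged face
                   age_logits, age_labels, age_weights,  # age classifier on aged face
                   id_feat_fake, id_feat_real,   # identity-encoder features
                   lambda_gender=1.0, lambda_id=1.0, lambda_age=1.0):
    # Adversarial term (least-squares GAN objective as a stand-in).
    adv = F.mse_loss(d_logits_fake, torch.ones_like(d_logits_fake))
    # Gender-preservation term: the aged face should keep the input's gender.
    gender = F.cross_entropy(gender_logits, gender_labels)
    # Identity-preservation term: match deep identity features of input and output.
    identity = F.mse_loss(id_feat_fake, id_feat_real)
    # Age-balance term: class-weighted cross-entropy on the target age group.
    age = F.cross_entropy(age_logits, age_labels, weight=age_weights)
    return adv + lambda_gender * gender + lambda_id * identity + lambda_age * age
```

In this sketch the gender and age terms act as the "self-guiding" signals the abstract describes: auxiliary classifiers on the synthesized face penalize any drift in gender and any miss of the target age group, while the identity term anchors the output to the input subject.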
Funder
Joint Demand Technology R&D of Regional SMEs funded by the Korea Ministry of SMEs and Startups in 2023
Subject
Electrical and Electronic Engineering,Computer Networks and Communications,Hardware and Architecture,Signal Processing,Control and Systems Engineering
Cited by
2 articles.