Abstract
Deep-learning-based cross-age face recognition has improved significantly in recent years. However, it remains challenging for discriminative methods to extract robust age-invariant features that suppress the interference caused by aging. In this paper, we propose a novel and effective attention-based feature decomposition model, the age-invariant feature extraction network, which learns more discriminative feature representations and reduces the disturbance caused by aging. Our method uses a feature decomposition module built on an efficient channel attention block to extract age-independent identity features from facial representations. Our end-to-end framework learns the age-invariant features directly, which is more convenient and greatly reduces training complexity compared with existing multi-stage training methods. In addition, we propose a direct sum loss function to reduce the interference of age-related features. Experimental results demonstrate stable performance that is superior to the state-of-the-art on four benchmark datasets: we obtain relative improvements of 0.06%, 0.2%, and 2.2% on the cross-age datasets CACD-VS, AgeDB, and CALFW, respectively, and a relative improvement of 0.03% on the general dataset LFW.
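To make the decomposition idea concrete, below is a minimal PyTorch sketch, not the authors' implementation: the ECABlock, FeatureDecomposition, and direct_sum_penalty names, the residual split x_id = x - x_age, the assumed backbone feature shape, and the orthogonality-style penalty standing in for the paper's direct sum loss are all illustrative assumptions.

```python
# Hedged sketch (assumptions): PyTorch; an ECA-style channel attention block;
# a residual decomposition x = x_id + x_age; an illustrative orthogonality
# penalty as a stand-in for the paper's direct sum loss.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ECABlock(nn.Module):
    """Efficient channel attention: per-channel weights from a 1-D conv
    over globally pooled channel statistics."""
    def __init__(self, channels: int, k_size: int = 3):
        super().__init__()
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k_size,
                              padding=k_size // 2, bias=False)

    def forward(self, x):                                   # x: (B, C, H, W)
        y = self.avg_pool(x)                                 # (B, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(1, 2))         # conv over channel dim
        y = torch.sigmoid(y.transpose(1, 2).unsqueeze(-1))   # (B, C, 1, 1)
        return x * y                                         # channel-reweighted features


class FeatureDecomposition(nn.Module):
    """Split backbone features into age-related and identity parts:
    x_age = ECA(x), x_id = x - x_age (a direct-sum-style residual split)."""
    def __init__(self, channels: int):
        super().__init__()
        self.eca = ECABlock(channels)

    def forward(self, x):
        x_age = self.eca(x)
        x_id = x - x_age
        return x_id, x_age


def direct_sum_penalty(f_id, f_age):
    """Illustrative term only: push pooled identity and age embeddings toward
    zero correlation so the two components behave like a direct sum."""
    f_id = F.normalize(f_id.flatten(1), dim=1)
    f_age = F.normalize(f_age.flatten(1), dim=1)
    return (f_id * f_age).sum(dim=1).abs().mean()


if __name__ == "__main__":
    feats = torch.randn(4, 512, 7, 7)                        # assumed backbone output
    x_id, x_age = FeatureDecomposition(512)(feats)
    loss = direct_sum_penalty(x_id.mean(dim=(2, 3)), x_age.mean(dim=(2, 3)))
    print(loss.item())
```

In this sketch the identity branch would feed the recognition head, while the penalty (together with an age classifier on x_age, omitted here) discourages age information from leaking into the identity features.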
Funder
National Research Foundation of Korea
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science