Multi-Agent Semi-Siamese Training for Long-Tail and Shallow Face Learning

Authors:

Yichun Tai¹, Hailin Shi², Dan Zeng¹, Hang Du¹, Yibo Hu², Zicheng Zhang³, Zhijiang Zhang¹, Tao Mei²

Affiliations:

1. Shanghai University

2. JD AI Research

3. University of Chinese Academy of Sciences

Abstract

With the recent development of deep convolutional neural networks and large-scale datasets, deep face recognition has made remarkable progress and been widely applied. However, unlike existing public face datasets, many real-world face recognition scenarios have training datasets that are shallow in depth, meaning that only two face images are available for each ID. When samples increase non-uniformly, this issue generalizes to long-tail face learning, which suffers simultaneously from data imbalance and a dearth of intra-class diversity. These adverse conditions harm training and degrade model performance. Building on Semi-Siamese Training, we introduce an advanced solution, named Multi-Agent Semi-Siamese Training (MASST), to address these problems. MASST comprises a probe network and multiple gallery agents; the former encodes the probe features, and the latter constitutes a stack of networks that encode the prototypes (gallery features). In each training iteration, a gallery network rotated sequentially from the stack and the probe network form a pair of Semi-Siamese networks. We provide theoretical and empirical analysis that, given long-tail (or shallow) data and the training loss, MASST smooths the loss landscape and satisfies Lipschitz continuity with the help of the multiple agents and the updating gallery queue. The proposed method introduces no extra dependency, and thus can be easily integrated with existing loss functions and network architectures. It is worth noting that although multiple gallery agents are employed for training, only the probe network is needed for inference, so the inference cost is not increased. Extensive experiments and comparisons demonstrate the advantages of MASST for long-tail and shallow face learning.
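To make the described scheme concrete, the following is a minimal PyTorch-style sketch of a MASST-like training loop. It is an illustration under stated assumptions, not the authors' implementation: the backbone, agent count, queue size, temperature, loss form, and the moving-average agent update are all assumed for the example; only the sequential rotation of gallery agents and the prototype queue follow the description above.

```python
import torch
import torch.nn.functional as F

num_agents, feat_dim, queue_size = 4, 512, 1024

def backbone():
    # Stand-in embedding network; a real system would use a face-recognition CNN.
    return torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 112 * 112, feat_dim))

probe_net = backbone()  # the only network kept for inference
gallery_agents = [backbone() for _ in range(num_agents)]  # training-only stack
for agent in gallery_agents:
    agent.load_state_dict(probe_net.state_dict())

# Queue of gallery prototypes serving as negatives (assumed, MoCo-style).
queue = F.normalize(torch.randn(queue_size, feat_dim), dim=1)
optimizer = torch.optim.SGD(probe_net.parameters(), lr=0.1, momentum=0.9)

# Dummy loader: each shallow-data ID contributes a (probe, gallery) image pair.
loader = [(torch.randn(8, 3, 112, 112), torch.randn(8, 3, 112, 112)) for _ in range(10)]

for step, (probe_imgs, gallery_imgs) in enumerate(loader):
    agent = gallery_agents[step % num_agents]  # rotate one agent from the stack

    probe_feat = F.normalize(probe_net(probe_imgs), dim=1)
    with torch.no_grad():  # the gallery side only supplies prototypes
        gallery_feat = F.normalize(agent(gallery_imgs), dim=1)

    # Positive logit against the paired prototype; negatives against the queue.
    pos = (probe_feat * gallery_feat).sum(dim=1, keepdim=True)
    neg = probe_feat @ queue.t()
    logits = torch.cat([pos, neg], dim=1) / 0.07  # temperature is assumed
    loss = F.cross_entropy(logits, torch.zeros(logits.size(0), dtype=torch.long))

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        # Assumed moving-average update of the rotated agent toward the probe
        # network, then refresh the queue with the newest gallery prototypes.
        for p_a, p_p in zip(agent.parameters(), probe_net.parameters()):
            p_a.mul_(0.999).add_(p_p, alpha=0.001)
        queue = torch.cat([gallery_feat, queue], dim=0)[:queue_size]
```

Consistent with the abstract, only probe_net would be retained for inference; the gallery agents and the queue exist solely during training, so the inference cost matches that of a single network.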

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Networks and Communications, Hardware and Architecture


Cited by 2 articles.

1. DiffBFR: Bootstrapping Diffusion Model for Blind Face Restoration. In Proceedings of the 31st ACM International Conference on Multimedia, 2023-10-26.

2. MRA-GNN: Minutiae Relation-Aware Model over Graph Neural Network for Fingerprint Embedding. In 2023 IEEE International Joint Conference on Biometrics (IJCB), 2023-09-25.
