Feature Equilibrium: An Adversarial Training Method to Improve Representation Learning

Authors:

Liu Minghui, Yang Meiyi, Deng Jiali, Cheng Xuan, Xie Tianshu, Deng Pan, Gong Haigang, Liu Ming, Wang Xiaomin

Abstract

Over-fitting is a significant threat to the integrity and reliability of deep neural networks with large numbers of parameters. One cause is that the model learns more specific features than general features during training. To address this, we propose an adversarial training method that helps the model strengthen general representation learning. In this method, we treat a classification model as a generator G and introduce an unsupervised discriminator D that distinguishes the hidden features of the classification model from real images, thereby limiting the spatial distance between them. Notably, D can fall into the trap of becoming a perfect discriminator, in which case the gradient of the adversarial loss drops to 0 after overtraining. To avoid this, we train D only with a probability $$P_{c}$$. Our proposed method is easy to incorporate into existing frameworks. It has been evaluated with various network architectures on datasets from different fields. Experiments show that, at low computational cost, this method outperforms the benchmark by 1.5–2 points on different datasets. For semantic segmentation on VOC, our proposed method achieves 2.2 points higher mAP.
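
As a rough illustration of the training scheme the abstract describes, the following is a minimal PyTorch-style sketch, not the authors' implementation: the classifier plays the role of the generator G, an auxiliary discriminator D tries to tell the classifier's hidden feature apart from the real image, and the D update is gated by a probability $$P_{c}$$ so that D never becomes a perfect discriminator with a vanishing adversarial gradient. The network architectures, the loss weighting, and the value of p_c here are assumptions made only for illustration.

```python
# Sketch of the adversarial scheme described in the abstract (assumed PyTorch,
# not the authors' code). G is the classifier whose hidden feature should stay
# close to the real image distribution, as judged by an auxiliary D.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class Classifier(nn.Module):          # generator G (hypothetical architecture)
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),   # hidden feature kept image-shaped
        )
        self.head = nn.Linear(3 * 32 * 32, num_classes)

    def forward(self, x):
        feat = self.backbone(x)               # hidden feature compared with x
        logits = self.head(feat.flatten(1))
        return logits, feat

class Discriminator(nn.Module):       # unsupervised D (hypothetical architecture)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, 3, stride=2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.net(x)                    # real/fake logit

G, D = Classifier(), Discriminator()
opt_g = torch.optim.SGD(G.parameters(), lr=0.1, momentum=0.9)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
p_c = 0.5                                     # probability of updating D (assumed value)

def train_step(images, labels):
    logits, feat = G(images)
    cls_loss = F.cross_entropy(logits, labels)
    # Adversarial term: G tries to make its hidden feature look "real" to D.
    adv_loss = F.binary_cross_entropy_with_logits(
        D(feat), torch.ones(images.size(0), 1))
    opt_g.zero_grad()
    (cls_loss + 0.1 * adv_loss).backward()    # 0.1 is an assumed weighting
    opt_g.step()

    # Update D only with probability p_c, so it does not become a perfect
    # discriminator whose adversarial gradient is 0.
    if random.random() < p_c:
        d_loss = (
            F.binary_cross_entropy_with_logits(
                D(images), torch.ones(images.size(0), 1))
            + F.binary_cross_entropy_with_logits(
                D(feat.detach()), torch.zeros(images.size(0), 1))
        )
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

# Example: one step on a random CIFAR-10-sized batch.
train_step(torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,)))
```

In this sketch the classification objective and the adversarial objective are optimized jointly for G in a single step, while D is updated on a coin flip; any per-loss schedule or weighting used in the paper itself is not specified in the abstract.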

Funder

the Science and Technology Program of Quzhou

the Science and Technology Program of Zhejiang

Publisher

Springer Science and Business Media LLC

Subject

Computational Mathematics, General Computer Science

