Contrastive Learning via Local Activity
Published: 2022-12-29
Issue: 1
Volume: 12
Page: 147
ISSN: 2079-9292
Container-title: Electronics
Language: en
Short-container-title: Electronics
Author: Zhu He, Chen Yang, Hu Guyue, Yu Shan
Abstract
Contrastive learning (CL) trains deep networks to discriminate between positive and negative pairs. As a powerful unsupervised pretraining method, CL has greatly reduced the performance gap with supervised training. However, current CL approaches mainly rely on sophisticated augmentations, large numbers of negative pairs, and chained gradient calculations, which make them complex to apply. To address these issues, in this paper we propose the local activity contrast (LAC) algorithm, an unsupervised method that learns meaningful representations from two forward passes and a locally defined loss. The learning target of each layer is to minimize the difference between its activations in the two forward passes, effectively overcoming the above-mentioned limitations of applying CL. We demonstrated that LAC can serve as a very useful pretraining method when reconstruction is used as the pretext task. Moreover, after pretraining with LAC, the networks exhibited competitive performance in various downstream tasks compared with other unsupervised learning methods.
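The abstract only sketches the mechanism, so the following is a minimal, hypothetical PyTorch illustration of a layer-local objective driven by two forward passes: each block minimizes the squared difference between its own activations on two views of the input and is updated by its own optimizer, with activations detached between blocks so no gradients are chained across layers. The mean-squared loss, the noise-perturbed views, and all names here (LocalBlock, local_activity_step) are assumptions made for illustration, not the paper's actual LAC formulation or its reconstruction pretext task.

    # Hypothetical sketch: per-layer loss from two forward passes, no chained gradients.
    import torch
    import torch.nn as nn

    class LocalBlock(nn.Module):
        """A single layer trained only through its own locally defined loss."""
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.fc = nn.Linear(in_dim, out_dim)
            self.act = nn.ReLU()

        def forward(self, x):
            return self.act(self.fc(x))

    def local_activity_step(blocks, optimizers, view_a, view_b):
        """Run two forward passes and update every block with its own local loss."""
        xa, xb = view_a, view_b
        losses = []
        for block, opt in zip(blocks, optimizers):
            ha, hb = block(xa), block(xb)
            loss = ((ha - hb) ** 2).mean()  # per-layer activation difference
            opt.zero_grad()
            loss.backward()
            opt.step()
            xa, xb = ha.detach(), hb.detach()  # detach: no gradient chained across layers
            losses.append(loss.item())
        return losses

    if __name__ == "__main__":
        torch.manual_seed(0)
        dims = [784, 256, 128]
        blocks = [LocalBlock(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]
        opts = [torch.optim.SGD(b.parameters(), lr=0.1) for b in blocks]
        x = torch.rand(32, 784)
        # Two "views" of the same batch; how the paper actually constructs the
        # two passes is not stated in the abstract, so this perturbation is an assumption.
        view_a = x + 0.1 * torch.randn_like(x)
        view_b = x + 0.1 * torch.randn_like(x)
        print(local_activity_step(blocks, opts, view_a, view_b))

As a design note, detaching activations at block boundaries is what makes the loss "local": each layer's update depends only on its own parameters and its (fixed) inputs, so no backpropagation through the rest of the network is required.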
Funder
National Key Research and Development Program of China; the International Partnership Program of CAS; the Strategic Priority Research Program of the Chinese Academy of Sciences; CAS Project for Young Scientists in Basic Research; Young Scientists Fund of the National Natural Science Foundation of China
Subject
Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering
Cited by: 2 articles.