Affiliation:
1. National University of Defense Technology, Changsha, China
Abstract
Graph Neural Networks (GNNs) have achieved promising performance in semi-supervised node classification in recent years. However, insufficient supervision, together with representation collapse, largely limits their performance in this field. To alleviate the collapse of node representations in the semi-supervised scenario, we propose a novel graph contrastive learning method, termed Mixed Graph Contrastive Network (MGCN). In our method, we improve the discriminative capability of the latent embeddings with an interpolation-based augmentation strategy and a correlation reduction mechanism. Specifically, we first conduct interpolation-based augmentation in the latent space and then force the prediction model to change linearly between samples. Second, we enable the learned network to distinguish samples across the two interpolation-perturbed views by forcing the cross-view correlation matrix to approximate an identity matrix. By combining these two settings, we extract rich supervision from both the abundant unlabeled nodes and the rare yet valuable labeled nodes for discriminative representation learning. Extensive experiments on six datasets demonstrate the effectiveness and generality of MGCN compared to existing state-of-the-art methods. The code of MGCN is available at https://github.com/xihongyang1999/MGCN on GitHub.
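The two mechanisms described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function names `latent_mixup` and `correlation_reduction_loss` are hypothetical, and the loss shown is a generic Barlow-Twins-style decorrelation objective that matches the abstract's description of pushing the cross-view correlation matrix toward an identity matrix.

```python
import numpy as np

def latent_mixup(z, alpha=2.0, rng=None):
    """Interpolation-based augmentation in the latent space:
    mix each node embedding with a randomly paired partner."""
    rng = rng if rng is not None else np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)        # mixing coefficient in (0, 1)
    perm = rng.permutation(len(z))      # random partner for each row
    return lam * z + (1.0 - lam) * z[perm]

def correlation_reduction_loss(z1, z2, eps=1e-8):
    """Force the cross-view correlation matrix toward identity:
    diagonal entries pulled to 1, off-diagonal entries pulled to 0."""
    z1 = (z1 - z1.mean(axis=0)) / (z1.std(axis=0) + eps)
    z2 = (z2 - z2.mean(axis=0)) / (z2.std(axis=0) + eps)
    n = z1.shape[0]
    c = z1.T @ z2 / n                   # (d, d) cross-view correlation matrix
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + off_diag
```

Under this sketch, two interpolation-perturbed views of the same embeddings would be produced by calling `latent_mixup` twice with different random states, and the loss drives the network to keep corresponding feature dimensions aligned across views while decorrelating distinct dimensions.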
Publisher
Association for Computing Machinery (ACM)
References: 84 articles.