LMACL: Improving Graph Collaborative Filtering with Learnable Model Augmentation Contrastive Learning
Published: 2024-06-19
Volume: 18, Issue: 7, Pages: 1–24
ISSN: 1556-4681
Container title: ACM Transactions on Knowledge Discovery from Data
Short container title: ACM Trans. Knowl. Discov. Data
Language: en
Authors:
Liu Xinru (1),
Hao Yongjing (1),
Zhao Lei (1),
Liu Guanfeng (2),
Sheng Victor S. (3),
Zhao Pengpeng (1)
Affiliations:
1. Soochow University, Suzhou, China
2. Macquarie University, Sydney, Australia
3. Texas Tech University, Lubbock, United States
Abstract
Graph collaborative filtering (GCF) has achieved exciting recommendation performance through its ability to aggregate high-order graph structure information. Recently, contrastive learning (CL) has been incorporated into GCF to alleviate data sparsity and noise issues. However, most existing methods employ random or manual augmentation to produce contrastive views, which may destroy the original topology and amplify noisy effects. We argue that such augmentation is insufficient to produce the optimal contrastive view, leading to suboptimal recommendation results. In this article, we propose a Learnable Model Augmentation Contrastive Learning (LMACL) framework for recommendation, which effectively combines graph-level and node-level collaborative relations to enhance the expressiveness of the collaborative filtering (CF) paradigm. Specifically, we first use a graph convolutional network (GCN) as the backbone encoder to incorporate multi-hop neighbors into graph-level original node representations by leveraging the high-order connectivity in user-item interaction graphs. At the same time, we treat a multi-head graph attention network (GAT) as an augmentation-view generator to adaptively generate high-quality node-level augmented views. Finally, joint learning enables end-to-end training, in which the mutual supervision and collaborative cooperation of the GCN and GAT achieve learnable model augmentation. Extensive experiments on several benchmark datasets demonstrate that LMACL improves over the strongest baseline by 2.5%–3.8% in Recall and 1.6%–4.0% in NDCG. Our model implementation code is available at https://github.com/LiuHsinx/LMACL.
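The pipeline the abstract describes, a GCN-propagated "original" view contrasted against a GAT-style attention-generated augmented view under an InfoNCE-type objective, can be sketched as a toy NumPy example. This is an illustrative sketch of the general idea, not the authors' implementation: the function names (`gcn_propagate`, `gat_propagate`, `info_nce`), the single-head attention, and the toy graph are all assumptions for demonstration.

```python
import numpy as np

def gcn_propagate(adj, x, hops=2):
    """GCN-style view: multi-hop propagation with the symmetrically
    normalized adjacency A_hat = D^{-1/2} (A + I) D^{-1/2}."""
    a = adj + np.eye(adj.shape[0])
    d = a.sum(axis=1)
    a_hat = a / np.sqrt(np.outer(d, d))
    h = x
    for _ in range(hops):
        h = a_hat @ h
    return h

def gat_propagate(adj, x):
    """Augmented view: single-head attention over each node's neighbors
    (dot-product scores, softmax restricted to the neighborhood + self)."""
    n = adj.shape[0]
    scores = x @ x.T
    scores = np.where(adj + np.eye(n) > 0, scores, -np.inf)  # mask non-edges
    e = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn = e / e.sum(axis=1, keepdims=True)
    return attn @ x

def info_nce(z1, z2, tau=0.2):
    """InfoNCE-style loss: node i's embedding in view 1 is the positive for
    node i in view 2; all other nodes serve as in-batch negatives."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau
    logits = sim - sim.max(axis=1, keepdims=True)            # stability shift
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy 3-node path graph and random features.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
x = np.random.RandomState(0).randn(3, 4)

z_gcn = gcn_propagate(adj, x)   # graph-level "original" representations
z_gat = gat_propagate(adj, x)   # node-level augmented representations
loss = info_nce(z_gcn, z_gat)   # contrastive objective between the two views
```

In the actual model this loss would be minimized jointly with the recommendation (e.g. BPR) loss, so the two encoders supervise each other end to end; here the two views are computed once to show the contrast structure.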
Funder
National Natural Science Foundation of China
National Key Research and Development Program of China
Universities of Jiangsu Province
Suzhou Science and Technology Development Program
Priority Academic Program Development of Jiangsu Higher Education Institutions
Publisher
Association for Computing Machinery (ACM)