VAE*: A Novel Variational Autoencoder via Revisiting Positive and Negative Samples for Top-N Recommendation

Authors:

Liu Wei¹, U Leong Hou², Liang Shangsong³, Zhu Huaijie⁴, Yu Jianxing⁴, Liu Yubao⁴, Yin Jian⁴

Affiliations:

1. Sun Yat-sen University, China and University of Macau, China

2. University of Macau, China

3. Sun Yat-sen University, China and Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates

4. Sun Yat-sen University, China

Abstract

Implicit feedback is widely used in recommender systems because it is easy to collect. Compared with point-wise and pair-wise learning methods, list-wise ranking methods achieve superior performance for Top-N recommendation. However, recent solutions, especially the list-wise methods, simply treat all of a user's interacted items as equally important positives and label all non-interacted items as negatives. For list-wise approaches, we argue that this annotation scheme for implicit feedback is over-simplified because the feedback data are sparse and lack fine-grained labels. To overcome this issue, we revisit the so-called positive and negative samples. First, starting from the list-wise ranking loss function, we theoretically analyze the impact of false positives and false negatives. Second, based on this analysis, we propose a self-adjusting credibility weighting mechanism to re-weight the positive samples, and we exploit higher-order relations in the item-item matrix to sample critical negative samples. To avoid introducing noise, we design a pruning strategy for the critical negatives. In addition, to combine the reconstruction losses for the positive and critical negative samples, we develop a simple yet effective VAE framework with a linear structure, abandoning complex nonlinear structures. Extensive experiments are conducted on six public real-world datasets. The results demonstrate that our VAE* outperforms other VAE-based models by a large margin. We also verify the effect of denoising positives and exploring critical negatives through an ablation study.
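To make the ideas in the abstract concrete, the following is a minimal, hypothetical sketch rather than the authors' implementation: a VAE with a purely linear encoder/decoder over an implicit-feedback matrix, plus a naive "critical negative" sampler that scores non-interacted items through the item-item co-occurrence matrix (a two-hop, higher-order relation). The class and function names (LinearVAE, sample_critical_negatives, loss_fn), the loss weighting, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch (not the paper's code): linear VAE + item-item-based
# negative sampling for implicit feedback. All names/values are illustrative.
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


class LinearVAE(nn.Module):
    """VAE whose encoder and decoder are single linear layers (no nonlinearity)."""

    def __init__(self, n_items: int, latent_dim: int = 64):
        super().__init__()
        self.enc = nn.Linear(n_items, 2 * latent_dim)  # outputs [mu, logvar]
        self.dec = nn.Linear(latent_dim, n_items)

    def forward(self, x):
        x = F.normalize(x, dim=-1)                     # normalize each user row
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        std = torch.exp(0.5 * logvar)
        z = mu + std * torch.randn_like(std)           # reparameterization trick
        return self.dec(z), mu, logvar


def sample_critical_negatives(interactions: np.ndarray, n_neg: int = 5) -> np.ndarray:
    """Pick non-interacted items that co-occur strongly with a user's positives.

    `interactions` is a dense 0/1 user-item matrix; X^T X is the item-item
    co-occurrence matrix, capturing higher-order (two-hop) item relations.
    """
    item_item = interactions.T @ interactions          # item co-occurrence counts
    np.fill_diagonal(item_item, 0)
    scores = interactions @ item_item                  # relevance to user history
    scores[interactions > 0] = -np.inf                 # exclude observed positives
    return np.argsort(-scores, axis=1)[:, :n_neg]      # top unobserved items


def loss_fn(logits, x, mu, logvar, neg_idx, beta=0.2, gamma=1.0):
    """Multinomial reconstruction + KL term + penalty on critical negatives."""
    log_softmax = F.log_softmax(logits, dim=-1)
    recon = -(log_softmax * x).sum(dim=-1).mean()
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=-1).mean()
    neg_penalty = log_softmax.gather(1, neg_idx).exp().sum(dim=-1).mean()
    return recon + beta * kl + gamma * neg_penalty


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = (rng.random((32, 100)) < 0.05).astype(np.float32)   # toy implicit feedback
    neg = torch.as_tensor(sample_critical_negatives(X), dtype=torch.long)
    x = torch.as_tensor(X)
    model = LinearVAE(n_items=100)
    logits, mu, logvar = model(x)
    print(loss_fn(logits, x, mu, logvar, neg).item())
```

This sketch only illustrates the general shape of the approach (linear VAE, item-item negatives, combined loss); the paper's credibility re-weighting of positives and its pruning strategy for critical negatives are not modeled here.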

Publisher

Association for Computing Machinery (ACM)

