Affiliation:
1. Sun Yat-sen University, China and University of Macau, China
2. University of Macau, China
3. Sun Yat-sen University, China and Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates
4. Sun Yat-sen University, China
Abstract
Because it is easy to collect, implicit feedback is widely used in recommender systems. Compared with point-wise and pair-wise learning methods, list-wise ranking methods achieve superior performance for Top-N recommendation. Recent solutions, especially the list-wise methods, simply treat all of a user's interacted items as equally important positives and annotate all non-interacted items as negatives. For list-wise approaches, we argue that this annotation scheme for implicit feedback is over-simplified, owing to the sparsity of the feedback data and its lack of fine-grained labels. To overcome this issue, we revisit the so-called positive and negative samples. Firstly, we theoretically analyze the impact of false positives and false negatives on the list-wise ranking loss. Secondly, based on this analysis, we propose a self-adjusting credibility weighting mechanism to re-weight the positive samples, and exploit higher-order relations encoded in the item-item matrix to sample critical negative samples. To avoid introducing noise, we design a pruning strategy for the critical negatives. Moreover, to combine the reconstruction losses for the positive samples and the critical negative samples, we develop a simple yet effective VAE framework with a linear structure, abandoning complex nonlinear architectures. Extensive experiments on six public real-world datasets demonstrate that our VAE* outperforms other VAE-based models by a large margin. An ablation study further verifies the effect of denoising positives and exploring critical negatives.
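To make the negative-sampling idea in the abstract concrete, the following is a minimal, hypothetical Python sketch of sampling critical negatives from an item-item co-occurrence matrix with a pruning step. It is not the paper's algorithm; the function name, scoring rule, and thresholds are illustrative assumptions only.

import numpy as np

def sample_critical_negatives(X, user, num_neg=5, noise_thresh=0.9):
    """X: binary user-item interaction matrix (num_users x num_items)."""
    co = (X.T @ X).astype(float)          # item-item co-occurrence matrix
    np.fill_diagonal(co, 0.0)
    sim = co / (co.max() + 1e-12)         # rough similarity score in [0, 1]
    interacted = np.flatnonzero(X[user])          # observed positives
    candidates = np.flatnonzero(X[user] == 0)     # non-interacted items
    # Relate each candidate to the user's positives via the item-item matrix.
    scores = sim[np.ix_(candidates, interacted)].max(axis=1)
    # Pruning step (assumed form): drop candidates whose relation to the
    # positives is suspiciously strong, treating them as potential noise.
    keep = scores < noise_thresh
    candidates, scores = candidates[keep], scores[keep]
    # Take the strongest remaining candidates as "critical" negatives.
    order = np.argsort(-scores)
    return candidates[order[:num_neg]]

# Toy example: 4 users x 6 items.
X = np.array([[1, 1, 0, 0, 1, 0],
              [1, 0, 1, 0, 0, 0],
              [0, 1, 1, 1, 0, 0],
              [0, 0, 0, 1, 1, 1]])
print(sample_critical_negatives(X, user=0, num_neg=2))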
Publisher
Association for Computing Machinery (ACM)