MCRPL: A Pretrain, Prompt, and Fine-tune Paradigm for Non-overlapping Many-to-one Cross-domain Recommendation

Author:

Hao Liu1, Lei Guo1, Lei Zhu1, Yongqiang Jiang2, Min Gao3, Hongzhi Yin4

Affiliation:

1. Shandong Normal University, Jinan, China

2. Kyoto University, Kyoto, Japan

3. Chongqing University, Chongqing, China

4. The University of Queensland, Brisbane, Australia

Abstract

Cross-domain Recommendation (CDR) aims to improve recommendations in a sparse target domain by leveraging information from richer source domains. Existing CDR methods mainly focus on overlapping scenarios, assuming that users are fully or partially shared across domains and can therefore serve as bridges connecting them. However, this assumption does not always hold, since leaking users' identity information to other domains is often prohibited. Conducting Non-overlapping Many-to-one Cross-domain Recommendation (NMCR) is challenging, since (1) the absence of overlapping information prevents us from directly aligning different domains, a problem that worsens in the many-to-one scenario, and (2) the distribution discrepancy between source and target domains makes it difficult to learn information that is common across domains. To overcome these challenges, we focus on NMCR and devise MCRPL as our solution. To address Challenge 1, we learn shared domain-agnostic and domain-dependent prompts and optimize them in the pre-training stage. To address Challenge 2, we then update only the domain-dependent prompts, with all other parameters kept frozen, to transfer domain knowledge to the target domain. We conduct experiments on five real-world domains, and the results show the advantage of MCRPL over several recent state-of-the-art baselines. Moreover, our source code has been publicly released. 1
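The two-stage paradigm the abstract describes (pre-train shared domain-agnostic and domain-dependent prompts, then fine-tune only the domain-dependent prompts with all other parameters frozen) can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the paper's implementation: the class, dimensions, and helper names (`MCRPLSketch`, `enter_prompt_tuning`, prompt counts) are all hypothetical.

```python
# Hypothetical sketch of the prompt-based two-stage setup; not the authors' code.
import torch
import torch.nn as nn

class MCRPLSketch(nn.Module):
    def __init__(self, n_items=1000, d=64, n_prompts=4, n_domains=3):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, d)
        # Domain-agnostic prompts: shared by every domain.
        self.agnostic_prompts = nn.Parameter(torch.randn(n_prompts, d))
        # Domain-dependent prompts: one bank per domain.
        self.domain_prompts = nn.Parameter(torch.randn(n_domains, n_prompts, d))
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
            num_layers=1)

    def forward(self, seq, domain_id):
        x = self.item_emb(seq)  # (B, L, d)
        # Prepend both prompt banks to the item sequence.
        prompts = torch.cat(
            [self.agnostic_prompts, self.domain_prompts[domain_id]], dim=0)
        prompts = prompts.unsqueeze(0).expand(x.size(0), -1, -1)
        h = self.encoder(torch.cat([prompts, x], dim=1))
        # Score next items against the shared item embedding table.
        return h[:, -1] @ self.item_emb.weight.T

def enter_prompt_tuning(model):
    # Stage 2: freeze everything except the domain-dependent prompts.
    for p in model.parameters():
        p.requires_grad = False
    model.domain_prompts.requires_grad = True
```

In stage 1 all parameters train on the source domains; calling `enter_prompt_tuning` switches to stage 2, where a gradient step can only move `domain_prompts`, which is how the frozen backbone's knowledge is transferred to the target domain.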

Funder

National Natural Science Foundation of China

Natural Science Foundation of Shandong Province

CCF-Baidu Open Fund

Australian Research Council Future Fellowship

Humanities and Social Sciences Fund of the Ministry of Education

Publisher

Association for Computing Machinery (ACM)

Cited by 1 article.

1. Attention-Based Difficulty Feature Enhancement for Knowledge Tracing. In 2024 International Joint Conference on Neural Networks (IJCNN), 2024-06-30.
