Affiliation:
1. University of Science and Technology of China
Abstract
Recommender systems are often susceptible to well-crafted fake profiles, leading to biased recommendations. Among existing defense methods, data-processing-based methods inevitably exclude normal samples, while model-based methods struggle to achieve both generalization and robustness. To this end, we propose integrating data processing with robust modeling in a general framework, Triple Cooperative Defense (TCD), which employs three cooperative models that mutually enhance the training data and thereby improve recommendation robustness. Furthermore, considering that existing attacks struggle to balance bi-level optimization and efficiency, we revisit poisoning attacks in recommender systems and introduce an efficient attack strategy, Co-training Attack (Co-Attack), which cooperatively optimizes attack generation and model training, respecting the bi-level setting while maintaining attack efficiency. Moreover, we reveal that a potential reason for the insufficient threat of existing attacks is their default assumption of optimizing attacks in undefended scenarios; this overly optimistic setting limits their potential. Consequently, we put forth a Game-based Co-training Attack (GCoAttack), which frames the proposed CoAttack and TCD as a game-theoretic process, thoroughly exploring CoAttack's potential through the cooperative training of attack and defense. Extensive experiments on three real datasets demonstrate TCD's superiority in enhancing model robustness.
Additionally, we verify that the two proposed attack strategies significantly outperform existing attacks, with game-based GCoAttack posing a greater poisoning threat than CoAttack.
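The cooperative data-enhancement idea behind TCD can be sketched as a tri-training loop: for each of three recommenders, the other two pseudo-label unobserved user-item entries on which they agree, and those pseudo-ratings augment that model's training data. The sketch below is an illustrative simplification under stated assumptions; the model representation, the agreement tolerance `agree_tol`, and the averaging rule are hypothetical choices, not the paper's exact procedure.

```python
# Tri-training sketch of TCD-style cooperative data augmentation.
# Each "model" is simplified to a dict of observed (user, item) -> rating
# pairs plus a global-mean fallback; real recommenders would be learned.

def predict(model, user, item):
    """Hypothetical predictor: known rating if observed, else global mean."""
    return model["ratings"].get((user, item), model["mean"])

def tri_train_round(models, unobserved, agree_tol=0.5):
    """For each model, pseudo-label unobserved entries on which the other
    two models agree, and add them to that model's training data."""
    for i, target in enumerate(models):
        peers = [m for j, m in enumerate(models) if j != i]
        for (u, v) in unobserved:
            if (u, v) in target["ratings"]:       # already observed: skip
                continue
            p1, p2 = (predict(m, u, v) for m in peers)
            if abs(p1 - p2) <= agree_tol:         # peer consensus reached
                target["ratings"][(u, v)] = (p1 + p2) / 2  # pseudo-label
    return models
```

Rounds of this loop would alternate with retraining each model on its augmented data; the consensus check is what filters out unreliable (potentially poisoned) pseudo-labels.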
Publisher
Research Square Platform LLC