Fight Fire with Fire: Towards Robust Graph Neural Networks on Dynamic Graphs via Actively Defense

Author:

Haoyang Li 1, Shimin Di 1, Calvin Hong Yi Li 2, Lei Chen 3, Xiaofang Zhou 1

Affiliation:

1. HKUST

2. The Cigna Group

3. HKUST(GZ) & HKUST

Abstract

Graph neural networks (GNNs) have achieved great success on various graph tasks. However, recent studies have revealed that GNNs are vulnerable to injection attacks: exploiting the openness of platforms, attackers can inject malicious nodes with carefully designed edges and node features that cause GNNs to misclassify target nodes. To resist such adversarial attacks, researchers have proposed GNN defenders. These defenders assume that the attack patterns are known in advance, e.g., that attackers tend to add edges between dissimilar nodes; they then remove edges between dissimilar nodes from attacked graphs to alleviate the negative impact of the attacks. On dynamic graphs, however, attackers can change their attack strategies over time, so existing GNN defenders, passively designed for specific attack patterns, fail to resist such attacks. In this paper, we propose a novel active GNN defender for dynamic graphs, namely ADGNN, which actively injects guardian nodes to protect target nodes from effective attacks. Specifically, we first formulate an active defense objective to design guardian node behaviors. This objective aims to disrupt the predictions of attackers and to protect easily attacked nodes, thereby preventing attackers from generating effective attacks. We then propose a gradient-based algorithm with two acceleration techniques to optimize this objective. Extensive experiments on four real-world graph datasets demonstrate the effectiveness of the proposed defender and its capacity to enhance existing GNN defenders.
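To make the idea of gradient-based guardian-node injection concrete, the following is a minimal toy sketch, not the paper's ADGNN algorithm. It assumes a one-layer linear GCN surrogate (logits = D^{-1/2}(A+I)D^{-1/2} X W), injects a single guardian node adjacent to a target node, and runs gradient ascent (here via a simple numerical gradient) on the guardian's features to enlarge the target's classification margin; the graph, weights, and hyperparameters are all invented for illustration.

```python
import numpy as np

def normalized_adj(A):
    # Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def margin(A, X, W, target, y_true):
    # Margin of the target node: true-class logit minus best wrong-class logit
    logits = normalized_adj(A) @ X @ W  # one-layer linear GCN surrogate
    return logits[target, y_true] - np.max(np.delete(logits[target], y_true))

# Toy graph: 4 original nodes, node 0 is the target to protect
rng = np.random.default_rng(0)
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = rng.normal(size=(4, 5))
W = rng.normal(size=(5, 3))
target, y_true = 0, 1

# Inject one guardian node and connect it to the target node
n = A.shape[0]
A_def = np.pad(A, ((0, 1), (0, 1)))
A_def[target, n] = A_def[n, target] = 1.0
X_def = np.vstack([X, np.zeros(5)])  # guardian features start at zero

# Gradient ascent on the guardian's features to raise the target's margin
lr, eps = 0.5, 1e-4
for _ in range(100):
    g = X_def[n].copy()
    grad = np.zeros_like(g)
    for i in range(len(g)):  # central-difference numerical gradient
        e = np.zeros_like(g); e[i] = eps
        X_def[n] = g + e; up = margin(A_def, X_def, W, target, y_true)
        X_def[n] = g - e; dn = margin(A_def, X_def, W, target, y_true)
        grad[i] = (up - dn) / (2 * eps)
    X_def[n] = g + lr * grad

print(margin(A, X, W, target, y_true))          # margin before defense
print(margin(A_def, X_def, W, target, y_true))  # larger margin after injection
```

A larger margin means the attacker must perturb the graph more heavily to flip the target's prediction, which is the intuition behind injecting guardian nodes before an attack occurs; the real method optimizes a defense objective over many guardians against an anticipated attacker rather than a single node's margin.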

Publisher

Association for Computing Machinery (ACM)

References: 96 articles.

