Abstract
In recent years, Graph Neural Networks (GNNs) have achieved excellent results in classification and prediction tasks. Recent studies have demonstrated, however, that GNNs are vulnerable to adversarial attacks. Graph Modification Attack (GMA) and Graph Injection Attack (GIA) are the two most common attack strategies. Most graph adversarial attack methods are based on GMA, which has a clear drawback: the attacker needs high privileges to modify the original graph, making such attacks difficult to execute in practice. GIA can perform attacks without modifying the original graph. However, many GIA models neglect attack imperceptibility, i.e., fake nodes can be easily distinguished from the original nodes. To solve this issue, we propose an imperceptible graph injection attack, named IMGIA. Specifically, IMGIA uses normal distribution sampling and mask learning to generate fake node features and links respectively, and then uses a homophily unnoticeability constraint to improve the camouflage of the attack. Our extensive experiments on three benchmark datasets demonstrate that IMGIA performs better than existing state-of-the-art GIA methods: on average, IMGIA improves attack effectiveness by 2%.
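The abstract mentions that fake node features are generated via normal distribution sampling. A minimal sketch of that idea, fitting a Gaussian to the original feature matrix and sampling injected-node features from it (all names, shapes, and the per-dimension fitting choice are assumptions for illustration, not IMGIA's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the original node feature matrix (100 nodes, 16 feature dims).
X = rng.random((100, 16))
n_fake = 5  # number of nodes to inject

# Fit a per-dimension normal distribution to the real features.
mu = X.mean(axis=0)
sigma = X.std(axis=0)

# Sample fake node features from the fitted distribution so they
# resemble the statistics of the original nodes.
X_fake = rng.normal(loc=mu, scale=sigma, size=(n_fake, X.shape[1]))

print(X_fake.shape)  # (5, 16)
```

Sampling from statistics of the real features, rather than optimizing features freely, is one simple way to keep injected nodes from being trivially distinguishable from original ones.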
Funder
National Key Research and Development Program of China
Construction Project for Innovation Platform of Qinghai Province
Publisher
Springer Science and Business Media LLC
Subject
Computational Mathematics, Engineering (miscellaneous), Information Systems, Artificial Intelligence
Cited by
1 article.