Affiliation:
1. PLA Strategic Support Force Information Engineering University, Zhengzhou, China
2. North China University of Water Resources and Electric Power, Zhengzhou, China
Abstract
As telecommunication fraud schemes continue to evolve, fraud is becoming increasingly concealed and disguised. Existing fraud detection methods based on Graph Neural Networks (GNNs) directly aggregate the features of a target node's neighbors as the node's updated features, which preserves what the neighbors have in common but ignores how they differ from the target node, making it difficult to distinguish fraudulent users from normal ones. To address this issue, a new model named Feature Difference-aware Graph Neural Network (FDAGNN) is proposed for detecting telecommunication fraud. FDAGNN first calculates the feature differences between target nodes and their neighbors, then aggregates these differences with a graph attention (GAT) mechanism, and finally fuses each target node's original features with the aggregated differences through a gated recurrent unit (GRU) to produce the node's updated features. Extensive experiments on two real-world telecom datasets show that FDAGNN outperforms seven baseline methods on most metrics, with a maximum improvement of about 5%.
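The three-step update described in the abstract (difference computation, attention-weighted aggregation, gated fusion) can be sketched for a single target node as follows. This is an illustrative NumPy sketch, not the paper's implementation: the exact attention and gate parameterizations (the vector `a`, the weight matrices `W_z`, `W_r`, `W_h`, and the LeakyReLU slope) are assumptions modeled on standard GAT and GRU formulations.

```python
import numpy as np


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def fdagnn_update(h_target, h_neighbors, a, W_z, W_r, W_h):
    """One FDAGNN-style update for a single node (illustrative sketch).

    h_target:    (d,)   target node features
    h_neighbors: (k, d) neighbor features
    a:           (2d,)  GAT-style attention parameter vector (assumed form)
    W_z, W_r, W_h: (d, 2d) GRU gate weights (assumed form)
    """
    k = h_neighbors.shape[0]

    # Step 1: feature differences between the target node and each neighbor.
    diffs = h_neighbors - h_target                              # (k, d)

    # Step 2: GAT-style attention over the differences:
    # score each [target || difference] pair, LeakyReLU, softmax-normalize.
    cat = np.concatenate([np.tile(h_target, (k, 1)), diffs], axis=1)  # (k, 2d)
    scores = cat @ a
    scores = np.where(scores > 0, scores, 0.2 * scores)         # LeakyReLU
    alpha = softmax(scores)                                     # (k,)
    agg = alpha @ diffs                  # (d,) aggregated feature difference

    # Step 3: GRU-style fusion of original features and aggregated differences.
    x = np.concatenate([h_target, agg])                         # (2d,)
    z = sigmoid(W_z @ x)                                        # update gate
    r = sigmoid(W_r @ x)                                        # reset gate
    h_tilde = np.tanh(W_h @ np.concatenate([r * h_target, agg]))
    return (1.0 - z) * h_target + z * h_tilde                   # (d,)
```

The gating in step 3 lets the model decide, per feature dimension, how much of the original representation to keep versus how much to overwrite with difference information, rather than replacing the target's features wholesale with an aggregate of its neighbors.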
Subject
Artificial Intelligence, General Engineering, Statistics and Probability