Abstract
Most message passing neural networks (MPNNs) are widely used for assortative network representation learning under the assumption of homophily between connected nodes. However, this fundamental assumption is inconsistent with the heterophily of disassortative networks (DNs) in many real-world applications. We therefore propose a novel MPNN called NEDA, based on neighborhood expansion, for disassortative network representation learning (DNRL). Specifically, NEDA first performs neighborhood expansion to seek more informative nodes for aggregation, and then performs data augmentation to speed up the optimization of a set of parameter matrices, making maximal use of the available training data at minimal computational cost. To evaluate the performance of NEDA comprehensively, we conduct experiments on benchmark disassortative network datasets of varying sizes; the results demonstrate the effectiveness of our NEDA model. The code is publicly available at https://github.com/xueyanfeng/NEDA.
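The abstract's neighborhood-expansion step, gathering nodes beyond immediate neighbors before aggregation, can be illustrated in general terms. The following is a minimal sketch of that idea (k-hop neighbor collection via BFS, then mean aggregation), not the paper's actual NEDA implementation; all function names and the plain-list feature representation are hypothetical:

```python
from collections import deque

def k_hop_neighbors(adj, node, k):
    """Collect all nodes within k hops of `node` (excluding itself) via BFS.

    `adj` is an adjacency dict: node -> list of neighbor nodes.
    """
    seen = {node}
    frontier = deque([(node, 0)])
    result = set()
    while frontier:
        u, d = frontier.popleft()
        if d == k:
            continue  # do not expand past k hops
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                result.add(v)
                frontier.append((v, d + 1))
    return result

def aggregate(features, adj, node, k=2):
    """Mean-aggregate feature vectors over the expanded k-hop neighborhood."""
    hood = k_hop_neighbors(adj, node, k)
    if not hood:
        return features[node]
    dim = len(features[node])
    return [sum(features[v][i] for v in hood) / len(hood) for i in range(dim)]
```

On a heterophilous graph, widening the receptive field this way can reach same-class nodes that are not directly connected, which is the motivation the abstract gives for expansion before aggregation.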
Funder
National Natural Science Foundation of China
Scientific and Technological Innovation Programs of Higher Education Institutions in Shanxi
Key Projects of Health Commission in Shanxi
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Software
References (52 articles)
1. Backstrom L, Boldi P, Rosa M, Ugander J, Vigna S (2012) Four degrees of separation. In: Proceedings of the 4th annual ACM web science conference, WebSci ’12. Association for Computing Machinery, New York, NY, USA, p 33-42. https://doi.org/10.1145/2380718.2380723
2. Bai WJ, Zhou T, Wang BH (2007) Immunization of susceptible-infected model on scale-free networks. Physica A 384(2):656–662
3. Barabasi AL (2016) Network science. Cambridge University Press, Cambridge
4. Bojchevski A, Günnemann S (2018) Deep Gaussian embedding of graphs: unsupervised inductive learning via ranking. In: International conference on learning representations, p 1–13
5. Bojchevski A, Klicpera J, Perozzi B, Blais M, Kapoor A, Lukasik M, Günnemann S (2019) Is PageRank all you need for scalable graph neural networks? In: Proceedings of the 15th international workshop on mining and learning with graphs (MLG)