Affiliation:
1. University of Science and Technology of China and State Key Laboratory of Cognitive Intelligence, Hefei, Anhui, P. R. China
2. Tianjin University, Tianjin, P. R. China
3. Tsinghua University, Shenzhen, P. R. China
Abstract
Graph Neural Networks (GNNs) such as Graph Convolutional Networks (GCNs) can effectively learn node representations by aggregating neighbors over the relation graph. However, with few exceptions, most previous work in this line ignores the topical semantics underlying the edges, which makes the node representations less effective and the learned relations between nodes hard to explain. For instance, current GNNs usually cannot tell us why two nodes are connected, such as the specific research topic behind a citation or the shared interest that links friends on a social platform. Recent work has begun to explore the extraction of relation semantics, but existing studies generally face two bottlenecks: they either cannot explain the mined latent relations, leaving their reasonableness and independence unverified, or they require textual content on the edges, which is unavailable in most real-world datasets. Both issues are crucial in practical use. In this work, we propose a novel Topic-Disentangled Graph Neural Network (TDG) that addresses both issues simultaneously by exploring relation topics from the perspective of node contents. We design an optimized graph topic module that processes node features to construct independent and explainable semantic subspaces; the relation topics corresponding to these subspaces are then assigned to each graph relation via a neighborhood routing mechanism. The proposed model can be easily combined with downstream graph tasks to form an end-to-end model, avoiding the risk of a mismatch between the node representation space and the task space. To evaluate the effectiveness of our model, we conduct extensive node-related experiments on three public datasets. The results show that TDG clearly outperforms state-of-the-art models.
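The topic-subspace and neighborhood-routing idea described in the abstract can be illustrated with a minimal sketch. The snippet below is only an assumption-laden illustration, not the authors' released implementation: the module name TopicRoutingLayer, the single linear projection split into K topic subspaces, the dot-product scoring, and the number of routing iterations are all hypothetical choices made here for clarity. It shows how node features can be projected into topic subspaces and how each edge can be softly assigned a topic distribution that weights neighbor aggregation.

```python
# Minimal sketch of topic-aware neighborhood routing (illustrative assumptions only).
import torch
import torch.nn as nn
import torch.nn.functional as F


class TopicRoutingLayer(nn.Module):
    """Project node features into K topic subspaces and softly assign each
    edge a topic distribution via iterative neighborhood routing."""

    def __init__(self, in_dim: int, out_dim: int, num_topics: int, iterations: int = 3):
        super().__init__()
        assert out_dim % num_topics == 0, "out_dim must split evenly across topics"
        self.K = num_topics
        self.d = out_dim // num_topics          # dimension of each topic subspace
        self.iters = iterations
        self.proj = nn.Linear(in_dim, out_dim)  # one projection, reshaped into K subspaces

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        # x: [N, in_dim], edge_index: [2, E] with rows (source, target)
        N = x.size(0)
        z = F.normalize(self.proj(x).view(N, self.K, self.d), dim=-1)  # topic subspaces
        src, dst = edge_index
        out = z.clone()
        for _ in range(self.iters):
            # Score how well each neighbor's topic factor matches the target node,
            # then softmax over topics to get a per-edge topic assignment.
            score = (out[dst] * z[src]).sum(dim=-1)          # [E, K]
            assign = F.softmax(score, dim=-1).unsqueeze(-1)  # [E, K, 1]
            # Aggregate neighbors into each topic channel, weighted by the assignment.
            agg = torch.zeros_like(out)
            agg.index_add_(0, dst, assign * z[src])
            out = F.normalize(out + agg, dim=-1)
        return out.view(N, -1)


if __name__ == "__main__":
    # Toy usage: 5 nodes, 16-dim features, 4 directed edges, 4 topics.
    x = torch.randn(5, 16)
    edge_index = torch.tensor([[0, 1, 2, 3], [1, 0, 3, 2]])
    layer = TopicRoutingLayer(in_dim=16, out_dim=32, num_topics=4)
    print(layer(x, edge_index).shape)  # torch.Size([5, 32])
```

The per-edge softmax over topics is what makes each connection interpretable in this sketch: the learned assignment indicates which semantic subspace best explains why the two endpoints are linked.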
Funder
National Natural Science Foundation of China
University Synergy Innovation Program of Anhui Province
Publisher
Association for Computing Machinery (ACM)
Cited by
4 articles.