Affiliation:
1. National University of Defense Technology, Changsha 410073, P.R.C. zhouxianchen13@nudt.edu.cn
2. National University of Defense Technology, Changsha 410073, P.R.C. wanghongxia@nudt.edu.cn
Abstract
Graph convolutional networks (GCNs) are powerful deep models for graph data. However, their explainability remains a difficult problem, since the training dynamics of graph neural networks are hard to describe. In this work, we show that for a GCN with a wide hidden feature dimension, the output on a semi-supervised problem can be described by a simple differential equation. Moreover, the dynamics of the output are governed by the graph convolutional neural tangent kernel (GCNTK), which is stable as the width of the hidden features tends to infinity. The solution of the node classification problem can then be explained directly by this differential equation. Experiments on toy models confirm the consistency between the GCNTK model and the GCN.
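The kernel described in the abstract can be illustrated empirically: for a finite-width network, the tangent kernel is the Gram matrix of output gradients with respect to the parameters. The sketch below (a minimal illustration, not the paper's implementation; the graph, widths, and normalization are all assumptions) computes this empirical GCNTK for a tiny two-layer GCN via numerical Jacobians and checks that it is a symmetric positive semidefinite matrix over the nodes.

```python
import numpy as np

# Hypothetical tiny example: 4-node path graph, 3 input features,
# width-64 hidden layer, one scalar output per node.
rng = np.random.default_rng(0)
n, d, width = 4, 3, 64

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(n)                          # add self-loops
deg = A_hat.sum(axis=1)
A_hat = A_hat / np.sqrt(np.outer(deg, deg))    # symmetric normalization

X = rng.normal(size=(n, d))
W1 = rng.normal(size=(d, width))
W2 = rng.normal(size=(width, 1))

def gcn(W1, W2):
    """Two-layer GCN with NTK-style 1/sqrt(fan-in) scaling."""
    H = np.maximum(A_hat @ X @ W1 / np.sqrt(d), 0.0)   # ReLU GCN layer
    return (A_hat @ H @ W2 / np.sqrt(width)).ravel()   # one output per node

def jacobian(W1, W2, eps=1e-5):
    """Numerical Jacobian of the n outputs w.r.t. all parameters."""
    flat = np.concatenate([W1.ravel(), W2.ravel()])
    def f(v):
        return gcn(v[:d * width].reshape(d, width),
                   v[d * width:].reshape(width, 1))
    J = np.zeros((n, flat.size))
    for k in range(flat.size):
        up, down = flat.copy(), flat.copy()
        up[k] += eps
        down[k] -= eps
        J[:, k] = (f(up) - f(down)) / (2 * eps)
    return J

J = jacobian(W1, W2)
ntk = J @ J.T   # empirical GCNTK: an n x n Gram matrix over the nodes

print(np.allclose(ntk, ntk.T))                   # symmetric
print(np.all(np.linalg.eigvalsh(ntk) > -1e-8))   # positive semidefinite
```

As the hidden width grows, repeated draws of the random weights produce kernels that fluctuate less and less, which is the stability property the abstract attributes to the infinite-width limit.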
Subject
Cognitive Neuroscience, Arts and Humanities (miscellaneous)
References: 42 articles.
Cited by: 1 article.