Abstract
The graph convolutional network (GCN) is a go-to solution for machine learning on graphs, but its training is notoriously difficult to scale both in terms of graph size and the number of model parameters. Although some work has explored training on large-scale graphs, we pioneer efficient training of large-scale GCN models with the proposal of a novel, distributed training framework, called GIST. GIST disjointly partitions the parameters of a GCN model into several, smaller sub-GCNs that are trained independently and in parallel. Compatible with all GCN architectures and existing sampling techniques, GIST (i) improves model performance, (ii) scales to training on arbitrarily large graphs, (iii) decreases wall-clock training time, and (iv) enables the training of markedly overparameterized GCN models. Remarkably, with GIST, we train an astonishingly wide 32,768-dimensional GraphSAGE model, which exceeds the capacity of a single GPU by a factor of $8\times$, to SOTA performance on the Amazon2M dataset.
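To make the core idea of the abstract concrete, the following is a minimal sketch (not the authors' code) of disjointly partitioning a GCN layer's parameters into independently trainable blocks. It assumes the partition is taken over hidden feature dimensions (columns of a weight matrix); the helper names `partition_weights`, `merge_weights`, and `num_subgcn` are illustrative, not part of the paper.

```python
# Sketch: split one GCN layer weight W (d_in x d_hidden) column-wise into
# disjoint blocks, one per sub-GCN; each block would be trained on its own
# worker and then written back into the full model.
import numpy as np

def partition_weights(W: np.ndarray, num_subgcn: int, rng: np.random.Generator):
    """Randomly assign hidden units (columns of W) to disjoint sub-GCNs."""
    perm = rng.permutation(W.shape[1])
    groups = np.array_split(perm, num_subgcn)           # disjoint column indices
    return [(idx, W[:, idx].copy()) for idx in groups]  # (indices, sub-weights)

def merge_weights(W: np.ndarray, sub_blocks):
    """Copy independently trained sub-GCN blocks back into the full matrix."""
    for idx, W_sub in sub_blocks:
        W[:, idx] = W_sub
    return W

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 256))                 # one layer: d_in x d_hidden
blocks = partition_weights(W, num_subgcn=4, rng=rng)
# ... each (idx, W_sub) pair is trained independently and in parallel here ...
W = merge_weights(W, blocks)
```

Because each sub-GCN holds only a fraction of the hidden units, each worker touches a model that is several times smaller than the full network, which is what allows the overall model to exceed the memory capacity of any single GPU.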
Funder
National Science Foundation
Publisher
Springer Science and Business Media LLC
Subject
Applied Mathematics, Computational Mathematics, Geometry and Topology
Cited by 2 articles.