Distributed Graph Neural Network Training: A Survey
Published: 2024-04-10
Volume: 56
Issue: 8
Pages: 1-39
ISSN: 0360-0300
Container-title: ACM Computing Surveys
Short-container-title: ACM Comput. Surv.
Language: en
Author:
Shao Yingxia (1), Li Hongzheng (1), Gu Xizhi (1), Yin Hongbo (1), Li Yawen (1), Miao Xupeng (2), Zhang Wentao (3), Cui Bin (4), Chen Lei (5)
Affiliation:
1. Beijing University of Posts and Telecommunications, Beijing, China
2. Carnegie Mellon University, Pittsburgh, USA
3. Mila – Québec AI Institute, HEC Montréal, Montreal, Canada
4. Peking University, Beijing, China
5. The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China
Abstract
Graph neural networks (GNNs) are a type of deep learning model trained on graphs and have been successfully applied in various domains. Despite their effectiveness, it remains challenging for GNNs to scale efficiently to large graphs. As a remedy, distributed computing has become a promising solution for training large-scale GNNs, since it can provide abundant computing resources. However, the dependencies induced by the graph structure make high-efficiency distributed GNN training difficult to achieve, as training suffers from massive communication and workload imbalance. In recent years, many efforts have been devoted to distributed GNN training, and an array of training algorithms and systems have been proposed. Yet, there is a lack of a systematic review of the optimization techniques for the distributed execution of GNN training. In this survey, we analyze three major challenges in distributed GNN training: massive feature communication, loss of model accuracy, and workload imbalance. We then introduce a new taxonomy of the optimization techniques in distributed GNN training that address these challenges. The taxonomy classifies existing techniques into four categories: GNN data partition, GNN batch generation, GNN execution model, and GNN communication protocol. We carefully discuss the techniques in each category. In conclusion, we summarize existing distributed GNN systems for multiple graphics processing units (GPUs), GPU clusters, and central processing unit (CPU) clusters, respectively, and discuss future directions for distributed GNN training.
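To make the feature-communication challenge concrete, the minimal sketch below (not taken from the survey) hash-partitions the vertices of a toy graph across workers and counts the cross-partition edges, whose destination workers must fetch remote source-vertex features during message passing. The helper names (`hash_partition`, `cross_partition_edges`) and the feature dimension are hypothetical, chosen only for illustration.

```python
# Minimal sketch, assuming a hash-based vertex (edge-cut) partition and an
# edge list of (src, dst) pairs. It estimates the feature-communication volume
# implied by a GNN data partition: every edge whose endpoints land on different
# workers forces the destination worker to pull the source vertex's feature
# vector over the network in each GNN layer.

from collections import defaultdict


def hash_partition(num_vertices, num_workers):
    """Assign each vertex to a worker by hashing its ID (a simple edge-cut partition)."""
    return {v: v % num_workers for v in range(num_vertices)}


def cross_partition_edges(edges, part):
    """Group edges whose endpoints live on different workers, keyed by (src_worker, dst_worker)."""
    remote = defaultdict(list)
    for src, dst in edges:
        if part[src] != part[dst]:
            remote[(part[src], part[dst])].append((src, dst))
    return remote


if __name__ == "__main__":
    # Toy graph: 8 vertices on a ring plus one chord.
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 0), (0, 4)]
    part = hash_partition(num_vertices=8, num_workers=2)

    remote = cross_partition_edges(edges, part)
    feature_dim = 128  # assumed per-vertex feature size
    n_remote = sum(len(v) for v in remote.values())
    print(f"cross-partition edges: {n_remote} of {len(edges)}")
    print(f"features to transfer per layer (floats): {n_remote * feature_dim}")
```

On this toy graph the naive hash partition cuts almost every edge, which is exactly why the survey's GNN data-partition category matters: locality-aware partitioners (e.g., METIS-style min-cut schemes) aim to shrink the cross-partition edge set and hence the per-layer feature traffic.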
Funder
National Science and Technology Major Project, National Natural Science Foundation of China, Beijing Nova Program, Xiaomi Young Talents Program, National Science Foundation of China, Hong Kong RGC GRF Project, CRF Project, AOE Project, RIF Project, Theme-based project, Guangdong Basic and Applied Basic Research Foundation, Hong Kong ITC ITF, Microsoft Research Asia Collaborative Research Grant, HKUST-Webank joint research lab grant, and HKUST Global Strategic Partnership Fund
Publisher
Association for Computing Machinery (ACM)
Cited by
9 articles.