Affiliation:
1. School of Electrical, Electronic and Mechanical Engineering, University of Bristol, Bristol BS8 1UB, UK
2. Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden
3. School of Engineering Mathematics and Technology, University of Bristol, Bristol BS8 1TW, UK
Abstract
Graph neural networks (GNNs) are powerful models capable of managing intricate connections in non-Euclidean data, such as social networks, physical systems, chemical structures, and communication networks. Despite their effectiveness, the large scale and complexity of graph data demand substantial computational resources during both training and inference, posing significant challenges, particularly for embedded systems. Recent studies on GNNs have investigated both software and hardware solutions to improve computational efficiency. Earlier work on deep neural networks (DNNs) has shown that methods such as reconfigurable hardware and quantization help address these issues; in contrast, efficient computational methods for GNNs are less developed and require further exploration. This survey reviews the latest developments in quantization and FPGA-based acceleration for GNNs, showing how reconfigurable systems (often FPGAs) can offer customized solutions in settings characterized by high sparsity and the need for dynamic load management. It also highlights the role of quantization in reducing both computational and memory demands through fixed-point arithmetic and compact vector formats. The paper focuses on low-power, resource-constrained devices rather than general-purpose hardware accelerators and reviews research applicable to embedded systems. Finally, it discusses research gaps, foundational knowledge, challenges, and prospective future directions.
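To make the quantization idea concrete, the following is a minimal sketch (not taken from the surveyed works) of symmetric uniform quantization, the basic scheme behind mapping floating-point GNN weights or node features to low-bit fixed-point integers; the function names and bit-width choice are illustrative assumptions.

```python
import numpy as np

def quantize_symmetric(x: np.ndarray, bits: int = 8):
    """Map float values to signed fixed-point integers with a per-tensor scale.

    Illustrative sketch: symmetric uniform quantization, so the integer
    range is [-(2^(bits-1)), 2^(bits-1) - 1] and zero maps exactly to 0.
    """
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit
    scale = float(np.max(np.abs(x))) / qmax
    if scale == 0.0:                      # all-zero input: any scale works
        scale = 1.0
    q = np.clip(np.round(x / scale), -qmax - 1, qmax).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float values from the quantized integers."""
    return q.astype(np.float32) * scale
```

With rounding to the nearest level, the reconstruction error per element is bounded by half the scale, which is why reducing the bit width trades accuracy for memory and compute savings.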
Funder
T.C. Millî Eğitim Bakanlığı
Knut and Alice Wallenberg Foundation
Cited by: 1 article.