Affiliation:
1. University of California, Davis
Abstract
For large-scale graph analytics on the GPU, the irregularity of data access/control flow and the complexity of programming GPUs have been two significant challenges for developing a programmable high-performance graph library. "Gunrock," our high-level bulk-synchronous graph-processing system targeting the GPU, takes a new approach to abstracting GPU graph analytics: rather than designing an abstraction around computation, Gunrock instead implements a novel data-centric abstraction centered on operations on a vertex or edge frontier. Gunrock achieves a balance between performance and expressiveness by coupling high-performance GPU computing primitives and optimization strategies with a high-level programming model that allows programmers to quickly develop new graph primitives with small code size and minimal GPU programming knowledge. We evaluate Gunrock on five graph primitives (BFS, BC, SSSP, CC, and PageRank) and show that Gunrock has on average at least an order of magnitude speedup over Boost and PowerGraph, comparable performance to the fastest GPU hardwired primitives, and better performance than any other GPU high-level graph library.
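To make the frontier-centric model concrete, below is a minimal CPU-side sketch of breadth-first search written in the advance/filter style the abstract describes. It is not Gunrock's actual GPU API; the CSR layout, the names CSRGraph and bfs_depths, and the in-loop structure are assumptions chosen for illustration only.

#include <cstdio>
#include <vector>

// Assumed compressed-sparse-row (CSR) graph layout, commonly used by GPU graph libraries.
struct CSRGraph {
    std::vector<int> row_offsets;   // size = num_vertices + 1
    std::vector<int> col_indices;   // size = num_edges
};

// "Advance": expand the current frontier along outgoing edges.
// "Filter": keep only vertices that are visited for the first time.
std::vector<int> bfs_depths(const CSRGraph& g, int source) {
    int n = static_cast<int>(g.row_offsets.size()) - 1;
    std::vector<int> depth(n, -1);
    depth[source] = 0;

    std::vector<int> frontier{source};
    while (!frontier.empty()) {
        std::vector<int> next;
        for (int u : frontier) {                                         // advance
            for (int e = g.row_offsets[u]; e < g.row_offsets[u + 1]; ++e) {
                int v = g.col_indices[e];
                if (depth[v] == -1) {                                    // filter
                    depth[v] = depth[u] + 1;
                    next.push_back(v);
                }
            }
        }
        frontier.swap(next);                                             // new frontier for next iteration
    }
    return depth;
}

int main() {
    // Tiny example graph: 0->1, 0->2, 1->3, 2->3
    CSRGraph g{{0, 2, 3, 4, 4}, {1, 2, 3, 3}};
    std::vector<int> d = bfs_depths(g, 0);
    for (int v = 0; v < 4; ++v)
        std::printf("depth[%d] = %d\n", v, d[v]);
    return 0;
}

In the actual system, the per-iteration advance and filter steps are the bulk-synchronous GPU operators that Gunrock parallelizes and load-balances; the sequential loops here only illustrate the shape of the abstraction.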
Funder
Defense Advanced Research Projects Agency
U.S. Army
UC Lab Fees Research Program Award
National Science Foundation
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design, Software
Cited by
36 articles.
1. Load Balanced PIM-Based Graph Processing;ACM Transactions on Design Automation of Electronic Systems;2024-06-21
2. Parallelization of butterfly counting on hierarchical memory;The VLDB Journal;2024-06-07
3. FuseIM: Fusing Probabilistic Traversals for Influence Maximization on Exascale Systems;Proceedings of the 38th ACM International Conference on Supercomputing;2024-05-30
4. Distributed Multi-GPU Community Detection on Exascale Computing Platforms;2024 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW);2024-05-27
5. RIMR: Reverse Influence Maximization Rank;2024 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW);2024-05-27