GPOP: Graph Processing Over Parts

Authors:

Kartik Lakhotia 1, Rajgopal Kannan 2, Sourav Pati 1, Viktor Prasanna 1

Affiliations:

1. University of Southern California, Los Angeles

2. US Army Research Lab, Los Angeles, CA

Abstract

The past decade has seen the development of many shared-memory graph processing frameworks intended to reduce the effort of developing high-performance parallel applications. However, many of these frameworks, based on vertex-centric or edge-centric paradigms, suffer from several issues, such as poor cache utilization, irregular memory accesses, heavy use of synchronization primitives, or theoretical inefficiency, that degrade overall performance and scalability. Recently, we proposed a cache- and memory-efficient partition-centric paradigm for computing PageRank [26]. In this article, we generalize this approach to develop a novel Graph Processing Over Parts (GPOP) framework that is cache efficient, scalable, and work efficient. GPOP induces locality in memory accesses by increasing the granularity of execution to vertex subsets called “parts,” thereby dramatically improving the cache performance of a variety of graph algorithms. It achieves high scalability by enabling completely lock- and atomic-free computation. GPOP’s built-in analytical performance model enables it to use a hybrid of source- and part-centric communication modes in a way that ensures work efficiency in each iteration, while simultaneously exploiting high-bandwidth sequential memory accesses. Finally, the GPOP framework is designed with programmability in mind. It completely abstracts away underlying parallelism and programming-model details from the user and provides an easy-to-program set of APIs with the ability to selectively continue the active vertex set across iterations. Such functionality is useful for many graph algorithms but is not intrinsically supported by current frameworks. We extensively evaluate the performance of GPOP for a variety of graph algorithms, using several large datasets. We observe that GPOP incurs up to 9×, 6.8×, and 5.5× fewer L2 cache misses compared to Ligra, GraphMat, and Galois, respectively. In terms of execution time, GPOP is up to 19×, 9.3×, and 3.6× faster than Ligra, GraphMat, and Galois, respectively.
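To make the partition-centric idea concrete, below is a minimal, illustrative C++ sketch of part-wise scatter/gather along the lines the abstract describes: vertices are grouped into cache-sized "parts", updates for a part are first collected in that part's private bin, and each bin is then applied by a single worker so no locks or atomics are needed. The data structures and names (Graph, PartitionedGraph, scatter, gather) and the simple push-style update are assumptions made for illustration only; they are not GPOP's actual API or internal format.

#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// A plain adjacency-list graph (illustrative, not GPOP's internal representation).
struct Graph {
    int64_t num_vertices = 0;
    std::vector<std::vector<int64_t>> out_neighbors;
};

// Vertices are grouped into contiguous, cache-sized "parts"; each part owns a bin
// that collects the updates destined for its vertices.
struct PartitionedGraph {
    int64_t part_size = 1 << 16;                                  // vertices per part (tunable)
    std::vector<std::vector<std::pair<int64_t, double>>> bins;    // one bin per destination part
};

// Scatter phase: each source vertex pushes (destination, message) pairs into the bin
// of the destination's part, so writes stay grouped by part and remain cache friendly.
void scatter(const Graph& g, const std::vector<double>& values, PartitionedGraph& pg) {
    for (auto& bin : pg.bins) bin.clear();
    for (int64_t u = 0; u < g.num_vertices; ++u) {
        int64_t deg = static_cast<int64_t>(g.out_neighbors[u].size());
        double msg = values[u] / std::max<int64_t>(deg, 1);      // PageRank-style contribution
        for (int64_t v : g.out_neighbors[u]) {
            pg.bins[v / pg.part_size].emplace_back(v, msg);
        }
    }
}

// Gather phase: each part consumes only its own bin, so workers assigned to different
// parts never write the same vertex and no synchronization primitives are required.
void gather(const PartitionedGraph& pg, std::vector<double>& next_values) {
    for (size_t p = 0; p < pg.bins.size(); ++p) {                 // parallelizable over parts
        for (const auto& [v, msg] : pg.bins[p]) {
            next_values[v] += msg;
        }
    }
}

In such a scheme the scatter loop can additionally be parallelized over source parts with per-thread bins, and the hybrid source-/part-centric communication modes mentioned in the abstract would roughly correspond to choosing, per part and per iteration, how updates are routed; this sketch shows only the single-threaded skeleton of the idea.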

Funder

Defense Advanced Research Projects Agency

National Science Foundation

Publisher

Association for Computing Machinery (ACM)

Subject

Computational Theory and Mathematics, Computer Science Applications, Hardware and Architecture, Modelling and Simulation, Software

References: 55 articles.

Cited by 28 articles.

1. Load Balanced PIM-Based Graph Processing; ACM Transactions on Design Automation of Electronic Systems; 2024-06-21

2. Accelerating Graph Analytics Using Attention-Based Data Prefetcher; SN Computer Science; 2024-06-13

3. Reordering and Compression for Hypergraph Processing; IEEE Transactions on Computers; 2024-06

4. Accelerating SpMV for Scale-Free Graphs with Optimized Bins; 2024 IEEE 40th International Conference on Data Engineering (ICDE); 2024-05-13

5. CGgraph: An Ultra-Fast Graph Processing System on Modern Commodity CPU-GPU Co-processor; Proceedings of the VLDB Endowment; 2024-02
