Affiliation:
1. IBM Research
2. Seoul National University
3. IBM Research and TU Delft
Abstract
In this paper, we propose ExtraV, a framework for near-storage graph processing. It is based on the novel concept of graph virtualization, which efficiently utilizes a cache-coherent hardware accelerator at the storage side to achieve performance and flexibility at the same time. ExtraV consists of four main components: 1) host processor, 2) main memory, 3) AFU (Accelerator Function Unit), and 4) storage. The AFU, a hardware accelerator, sits between the host processor and storage. Using a coherent interface that allows main-memory accesses, it performs the graph traversal functions common to various algorithms, while the program running on the host processor (the host program) manages overall execution along with more application-specific tasks. Graph virtualization is a high-level programming model for graph processing that lets designers focus on algorithm-specific functions. Realized by the accelerator, it gives host programs the illusion that the graph data reside in main memory in a layout that fits the host programs' memory access behavior, even though the data are actually stored in a multi-level, compressed form in storage. We prototyped ExtraV on a Power8 machine with a CAPI-enabled FPGA. Experiments on this real-system prototype show significant speedups over state-of-the-art software-only implementations.
Cited by
48 articles.
1. ECG: Expressing Locality and Prefetching for Optimal Caching in Graph Structures;2024 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW);2024-05-27
2. Data Motion Acceleration: Chaining Cross-Domain Multi Accelerators;2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA);2024-03-02
3. Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System;2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA);2024-03-02
4. Integrating FPGA-based hardware acceleration with relational databases;Parallel Computing;2024-02
5. Fusing In-storage and Near-storage Acceleration of Convolutional Neural Networks;ACM Journal on Emerging Technologies in Computing Systems;2023-11-14