Affiliation:
1. Technion -- Israel Institute of Technology
Abstract
Modern discrete GPUs have been the processors of choice for accelerating compute-intensive applications, but using them in large-scale data processing is extremely challenging. Unfortunately, they do not provide important I/O abstractions long established in the CPU context, such as memory-mapped files, which shield programmers from the complexity of buffer and I/O device management. However, implementing these abstractions on GPUs poses a problem: the limited GPU virtual memory system provides no address space management and page fault handling mechanisms to GPU developers, and does not allow modifications to memory mappings for running GPU programs.
We implement ActivePointers, a software address translation layer and paging system that introduces native support for page faults and virtual address space management to GPU programs, and enables the implementation of fully functional memory-mapped files on commodity GPUs. Files mapped into GPU memory are accessed using active pointers, which behave like regular pointers but access the GPU page cache under the hood, and trigger page faults which are handled on the GPU. We design and evaluate a number of novel mechanisms, including a translation cache in hardware registers and translation aggregation for deadlock-free page fault handling of threads in a single warp.
We extensively evaluate ActivePointers on commodity NVIDIA GPUs using microbenchmarks, and also implement a complex image processing application that constructs a photo collage from a subset of 10 million images stored in a 40GB file. The GPU implementation maps the entire file into GPU memory and accesses it via active pointers. The use of active pointers adds at most 1% to the application's runtime, while enabling speedups of up to 3.9× over a combined CPU+GPU implementation and 2.6× over a 12-core CPU-only implementation that uses AVX vector instructions.
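To illustrate the idea behind active pointers, the sketch below is a simplified host-side C++ analogue (not the paper's GPU implementation, which runs this logic inside GPU threads against a GPU-resident page cache): a pointer-like object that translates a file offset through a software page cache on every dereference, faulting pages in from backing storage on first touch. All names (`ActivePtr`, `PageCache`, `translate`) are hypothetical.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <unordered_map>
#include <vector>

constexpr std::size_t PAGE_SIZE = 4096;

// Hypothetical software page cache: maps page numbers to cached page data,
// copying pages in from "backing storage" (a stand-in for the mapped file)
// the first time they are touched -- i.e., a software page fault.
struct PageCache {
    const std::vector<std::uint8_t>* backing;
    std::unordered_map<std::size_t, std::vector<std::uint8_t>> pages;

    std::uint8_t* translate(std::size_t offset) {
        std::size_t pn = offset / PAGE_SIZE;
        auto it = pages.find(pn);
        if (it == pages.end()) {               // miss: handle the "page fault"
            std::vector<std::uint8_t> page(PAGE_SIZE, 0);
            std::size_t base = pn * PAGE_SIZE;
            std::size_t n = std::min(PAGE_SIZE, backing->size() - base);
            std::memcpy(page.data(), backing->data() + base, n);
            it = pages.emplace(pn, std::move(page)).first;
        }
        return it->second.data() + offset % PAGE_SIZE;
    }
};

// Behaves like a T* into the mapped file, but every access goes through
// the page cache instead of a raw hardware address.
template <typename T>
struct ActivePtr {
    PageCache* cache;
    std::size_t offset;  // byte offset into the mapped file
    T& operator*() { return *reinterpret_cast<T*>(cache->translate(offset)); }
    ActivePtr operator+(std::size_t i) const {
        return {cache, offset + i * sizeof(T)};
    }
};
```

The real system adds the mechanisms the abstract mentions, such as caching recent translations in hardware registers and aggregating faults across a warp; this sketch only shows the translate-on-dereference pattern.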
Publisher
Association for Computing Machinery (ACM)
Cited by 14 articles.
1. Getting a Handle on Unmanaged Memory. Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3, 2024-04-27.
2. GPU Graph Processing on CXL-Based Microsecond-Latency External Memory. Proceedings of the SC '23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis, 2023-11-12.
3. G10: Enabling An Efficient Unified GPU Memory and Storage Architecture with Smart Tensor Migrations. 56th Annual IEEE/ACM International Symposium on Microarchitecture, 2023-10-28.
4. GPU Performance Acceleration via Intra-Group Sharing TLB. Proceedings of the 52nd International Conference on Parallel Processing, 2023-08-07.
5. GPU-Initiated On-Demand High-Throughput Storage Access in the BaM System Architecture. Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2, 2023-01-27.