ActivePointers

Author:

Sagi Shahar¹, Shai Bergman¹, Mark Silberstein¹

Affiliation:

1. Technion -- Israel Institute of Technology

Abstract

Modern discrete GPUs have been the processors of choice for accelerating compute-intensive applications, but using them in large-scale data processing is extremely challenging. Unfortunately, they do not provide important I/O abstractions long established in the CPU context, such as memory-mapped files, which shield programmers from the complexity of buffer and I/O device management. However, implementing these abstractions on GPUs poses a problem: the limited GPU virtual memory system provides no address space management and page fault handling mechanisms to GPU developers, and does not allow modifications to memory mappings for running GPU programs. We implement ActivePointers, a software address translation layer and paging system that introduces native support for page faults and virtual address space management to GPU programs, and enables the implementation of fully functional memory-mapped files on commodity GPUs. Files mapped into GPU memory are accessed using active pointers, which behave like regular pointers but access the GPU page cache under the hood, and trigger page faults which are handled on the GPU. We design and evaluate a number of novel mechanisms, including a translation cache in hardware registers and translation aggregation for deadlock-free page fault handling of threads in a single warp. We extensively evaluate ActivePointers on commodity NVIDIA GPUs using microbenchmarks, and also implement a complex image processing application that constructs a photo collage from a subset of 10 million images stored in a 40GB file. The GPU implementation maps the entire file into GPU memory and accesses it via active pointers. The use of active pointers adds only up to 1% to the application's runtime, while enabling speedups of up to 3.9× over a combined CPU+GPU implementation and 2.6× over a 12-core CPU-only implementation which uses AVX vector instructions.

Publisher

Association for Computing Machinery (ACM)


Cited by 14 articles.

1. Getting a Handle on Unmanaged Memory;Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 3;2024-04-27

2. GPU Graph Processing on CXL-Based Microsecond-Latency External Memory;Proceedings of the SC '23 Workshops of The International Conference on High Performance Computing, Network, Storage, and Analysis;2023-11-12

3. G10: Enabling An Efficient Unified GPU Memory and Storage Architecture with Smart Tensor Migrations;56th Annual IEEE/ACM International Symposium on Microarchitecture;2023-10-28

4. GPU Performance Acceleration via Intra-Group Sharing TLB;Proceedings of the 52nd International Conference on Parallel Processing;2023-08-07

5. GPU-Initiated On-Demand High-Throughput Storage Access in the BaM System Architecture;Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2;2023-01-27
