Affiliation:
1. William & Mary, Williamsburg, VA, USA
2. Jefferson Lab, Newport News, VA, USA
Abstract
The many-body correlation function is a fundamental computation kernel in modern physics computing applications, e.g., hadron contractions in lattice quantum chromodynamics (QCD). This kernel is both computation and memory intensive, involving a series of tensor contractions, and thus usually runs on accelerators like GPUs. Existing optimizations of many-body correlation mainly focus on individual tensor contractions (e.g., the cuBLAS library and others). In contrast, this work discovers a new optimization dimension for many-body correlation by exploring the optimization opportunities among tensor contractions. More specifically, it targets general GPU architectures (both NVIDIA and AMD) and optimizes many-body correlation's memory management by exploiting a set of memory allocation and communication redundancy elimination opportunities: first, GPU memory allocation redundancy: the intermediate output frequently occurs as input in subsequent calculations; second, CPU-GPU communication redundancy: although all tensors are allocated on both CPU and GPU, many of them are used (and reused) on the GPU side only, and thus many CPU/GPU communications (like those in existing Unified Memory designs) are unnecessary; third, GPU oversubscription: the limited GPU memory size causes oversubscription issues, and existing memory management usually results in near-reuse data eviction, thus incurring extra CPU/GPU memory communications.
Targeting these memory optimization opportunities, this article proposes MemHC, an optimized systematic GPU memory management framework that accelerates the calculation of many-body correlation functions through a series of new memory reduction designs. These designs address GPU memory allocation, CPU/GPU memory movement, and GPU memory oversubscription, respectively. First, MemHC employs duplication-aware management and lazy release of GPU memory to the corresponding host manager for better data reusability. Second, it implements data reorganization and on-demand synchronization to eliminate redundant (or unnecessary) data transfers. Third, MemHC exploits an optimized Least Recently Used (LRU) eviction policy, called Pre-Protected LRU, to reduce evictions and increase memory hits. Additionally, MemHC is portable across platforms, including NVIDIA GPUs and AMD GPUs. The evaluation demonstrates that MemHC outperforms unified memory management by \( 2.18\times \) to \( 10.73\times \). The proposed Pre-Protected LRU policy outperforms the original LRU policy by up to \( 1.36\times \).
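The abstract only names the Pre-Protected LRU policy without detailing it. As a minimal sketch of the general idea (shielding blocks known to be reused soon from eviction), assuming a `protect`/`unprotect` interface driven by the contraction schedule, this Python class illustrates one plausible form; the class name, method names, and protection heuristic are illustrative assumptions, not MemHC's actual API:

```python
from collections import OrderedDict

class PreProtectedLRU:
    """Sketch of an LRU eviction policy in which blocks marked as
    near-reuse ("pre-protected") are skipped when choosing a victim.
    All names and the protection interface are assumptions."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()   # block id -> payload, in LRU order
        self.protected = set()        # blocks the scheduler marks as near-reuse

    def protect(self, key):
        # A contraction scheduler would call this for tensors that
        # appear as inputs to upcoming contractions.
        self.protected.add(key)

    def unprotect(self, key):
        self.protected.discard(key)

    def access(self, key, payload=None):
        if key in self.blocks:
            self.blocks.move_to_end(key)  # refresh recency on a hit
            return self.blocks[key]
        # Miss: evict the least recently used *unprotected* block first.
        while len(self.blocks) >= self.capacity:
            victim = next((k for k in self.blocks if k not in self.protected), None)
            if victim is None:
                victim = next(iter(self.blocks))  # all protected: plain LRU fallback
            del self.blocks[victim]
        self.blocks[key] = payload
        return payload
```

Under plain LRU, a tensor evicted just before its reuse forces an extra CPU-to-GPU transfer; marking it protected keeps it resident, which matches the abstract's point about near-reuse data eviction.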
Funder
NSF
US Department Of Energy, Office of Science, Offices of Nuclear Physics and Advanced Scientific Computing Research, through the SciDAC program
Jefferson Lab
Publisher
Association for Computing Machinery (ACM)
Subject
Hardware and Architecture,Information Systems,Software
Cited by 3 articles.
1. A perceptual and predictive batch-processing memory scheduling strategy for a CPU-GPU heterogeneous system;Frontiers of Information Technology & Electronic Engineering;2023-07
2. Graph Contractions for Calculating Correlation Functions in Lattice QCD;Proceedings of the Platform for Advanced Scientific Computing Conference;2023-06-26
3. iQAN;Proceedings of the 28th ACM SIGPLAN Annual Symposium on Principles and Practice of Parallel Programming;2023-02-21