Shared Last-Level Cache Management and Memory Scheduling for GPGPUs with Hybrid Main Memory
Published: 2018-07-31
Volume: 17
Issue: 4
Pages: 1-25
ISSN: 1539-9087
Container-title: ACM Transactions on Embedded Computing Systems
Short-container-title: ACM Trans. Embed. Comput. Syst.
Language: en
Authors:
Wang Guan (1),
Zang Chuanqi (2),
Ju Lei (2),
Zhao Mengying (1),
Cai Xiaojun (1),
Jia Zhiping (1)
Affiliations:
1. Shandong University, Qingdao, China
2. Shandong University, Ji'nan, China
Abstract
Memory-intensive workloads are increasingly popular on general-purpose graphics processing units (GPGPUs) and impose great challenges on GPGPU memory subsystem design. Meanwhile, with recent developments in non-volatile memory (NVM) technologies, hybrid memory combining DRAM and NVM achieves high performance, low power, and high density simultaneously, making it a promising main memory design for GPGPUs. In this article, we explore shared last-level cache management for GPGPUs with consideration of the underlying hybrid main memory. To improve overall memory subsystem performance, we exploit both the asymmetric read/write latency of the hybrid main memory architecture and the memory coalescing feature of GPGPUs. In particular, to reduce the average cost of L2 cache misses, we prioritize cache blocks from DRAM or NVM based on the observation that operations to the NVM part of main memory have a large impact on system performance. Furthermore, the cache management scheme integrates GPU memory coalescing and cache bypassing techniques to improve overall system performance. To minimize the impact of memory divergence among simultaneously executed groups of threads, we propose a hybrid-main-memory-aware and warp-aware memory scheduling mechanism for GPGPUs. Experimental results show that, in the context of a hybrid main memory system, our proposed L2 cache management policy and memory scheduling mechanism improve performance by 15.69% on average for memory-intensive benchmarks, with a maximum gain of up to 29%, and reduce memory subsystem energy by 21.27% on average.
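The core idea in the abstract, prioritizing cache blocks by their backing memory because NVM misses are costlier to service than DRAM misses, can be illustrated with a toy eviction policy. The sketch below is not the paper's actual algorithm; it is a minimal, hypothetical LRU variant that prefers to evict DRAM-backed blocks first, falling back to plain LRU when only NVM-backed blocks remain. All names and the cost ratio are assumptions for illustration.

```python
from collections import OrderedDict

class HybridAwareCache:
    """Toy fully-associative LRU cache whose eviction prefers blocks
    backed by DRAM, since refetching them is cheaper than refetching
    NVM-backed blocks (illustrative sketch, not the paper's policy)."""

    def __init__(self, capacity):
        self.capacity = capacity
        # addr -> 'dram' | 'nvm'; insertion order doubles as LRU order
        self.blocks = OrderedDict()

    def access(self, addr, backing):
        """Return True on a hit, False on a miss (block is then filled)."""
        if addr in self.blocks:
            self.blocks.move_to_end(addr)  # hit: refresh LRU position
            return True
        if len(self.blocks) >= self.capacity:
            self._evict()
        self.blocks[addr] = backing
        return False

    def _evict(self):
        # Prefer the least-recently-used DRAM-backed block; a DRAM
        # refill is cheap, so keeping NVM-backed blocks resident avoids
        # the slower NVM accesses the abstract identifies as dominant.
        for addr, backing in self.blocks.items():
            if backing == 'dram':
                del self.blocks[addr]
                return
        # Every resident block is NVM-backed: fall back to plain LRU.
        self.blocks.popitem(last=False)

cache = HybridAwareCache(capacity=2)
cache.access(0x100, 'nvm')   # miss, filled
cache.access(0x200, 'dram')  # miss, filled
cache.access(0x300, 'nvm')   # miss: evicts 0x200 (DRAM) rather than 0x100
hit = cache.access(0x100, 'nvm')  # still resident -> hit
```

A real design would weigh this against recency and dirtiness (a dirty NVM block also implies an expensive writeback), which is why the paper combines it with coalescing-aware bypassing rather than using backing type alone.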
Funder
National Key R&D Program of China
Shandong Provincial Natural Science Foundation
State Key Program of NSFC
Research and Application of Key Technology for Intelligent Dispatching and Security Early-Warning of Large Power Grid
Young Scholars Program of Shandong University
State Grid Corporation of China
Natural Science Foundation of China
Publisher
Association for Computing Machinery (ACM)
Subject
Hardware and Architecture; Software