Affiliations:
1. University of Michigan, Ann Arbor, MI, USA
2. NVIDIA, Austin, TX, USA
Abstract
Systems from smartphones to supercomputers are increasingly heterogeneous, being composed of both CPUs and GPUs. To maximize cost and energy efficiency, these systems will increasingly use globally-addressable heterogeneous memory systems, making choices about memory page placement critical to performance. In this work we show that current page placement policies are not sufficient to maximize GPU performance in these heterogeneous memory systems. We propose two new page placement policies that improve GPU performance: one application agnostic and one using application profile information. Our application agnostic policy, bandwidth-aware (BW-AWARE) placement, maximizes GPU throughput by balancing page placement across the memories based on the aggregate memory bandwidth available in a system. Our simulation-based results show that BW-AWARE placement outperforms the existing Linux INTERLEAVE and LOCAL policies by 35% and 18% on average for GPU compute workloads. We build upon BW-AWARE placement by developing a compiler-based profiling mechanism that provides programmers with information about GPU application data structure access patterns. Combining this information with simple program-annotated hints about memory placement, our hint-based page placement approach performs within 90% of oracular page placement on average, largely mitigating the need for costly dynamic page tracking and migration.
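To make the BW-AWARE idea concrete, the sketch below shows how pages could be spread across two memories in proportion to their bandwidths instead of all-local (LOCAL) or 50/50 (INTERLEAVE). This is an illustrative assumption-based example, not the paper's implementation; the bandwidth figures and the place_page helper are hypothetical.

#include <stdio.h>

/* Sketch of bandwidth-aware (BW-AWARE) page placement: distribute pages
 * across two memories in proportion to their bandwidths. The bandwidth
 * values and interleaving granularity here are assumptions for
 * illustration only. */
typedef enum { MEM_GPU = 0, MEM_CPU = 1 } mem_t;

static mem_t place_page(unsigned page_idx,
                        unsigned gpu_bw_gbs, unsigned cpu_bw_gbs)
{
    unsigned total = gpu_bw_gbs + cpu_bw_gbs;
    /* Out of every `total` consecutive pages, send `gpu_bw_gbs` of them
     * to GPU memory and the rest to CPU memory, so the long-run split
     * matches the bandwidth ratio. */
    return (page_idx % total) < gpu_bw_gbs ? MEM_GPU : MEM_CPU;
}

int main(void)
{
    /* Example: an assumed 200 GB/s GPU memory and 80 GB/s CPU memory
     * give roughly a 71%/29% page split. */
    unsigned gpu = 0, cpu = 0;
    for (unsigned p = 0; p < 1000; ++p) {
        if (place_page(p, 200, 80) == MEM_GPU) ++gpu; else ++cpu;
    }
    printf("GPU pages: %u, CPU pages: %u\n", gpu, cpu);
    return 0;
}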
Publisher
Association for Computing Machinery (ACM)
References
51 articles.
1. T. M. Aamodt, W. W. L. Fung, I. Singh, A. El-Shafiey, J. Kwa, T. Hetherington, A. Gubran, A. Boktor, T. Rogers, A. Bakhoda, and H. Jooybar. GPGPU-Sim 3.x Manual. http://gpgpu-sim.org/manual/index.php/GPGPU-Sim_3.x_Manual, 2014. [Online; accessed 4-December-2014].
2. Handling the problems and opportunities posed by multiple on-chip memory controllers
3. Analyzing CUDA workloads using a detailed GPU simulator
4. Energy efficient Phase Change Memory based main memory for future high performance systems
Cited by
3 articles.
1. Trans-FW: Short Circuiting Page Table Walk in Multi-GPU Systems via Remote Forwarding;2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA);2023-02
2. GPS: A Global Publish-Subscribe Model for Multi-GPU Memory Management;MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture;2021-10-17
3. Whirlpool;ACM SIGOPS Operating Systems Review;2016-03-25