Abstract
Conventional on-chip TLB hierarchies are unable to fully cover the growing working-set sizes of applications. To make matters worse, Last-Level TLB (LLT) misses require multiple accesses to the page table, even with the use of page walk caches. Consequently, LLT misses incur long address translation latency and hurt performance. This article proposes two low-overhead hardware mechanisms for reducing the frequency and penalty of on-die LLT misses. The first, Unified CAche and TLB (UCAT), enables the conventional on-die Last-Level Cache to store cache lines and TLB entries in a single unified structure, significantly increasing on-die TLB capacity. The second, DRAM-TLB, memoizes virtual-to-physical address translations in DRAM and reduces the LLT miss penalty when UCAT is unable to fully cover the total application working set. DRAM-TLB serves as the next, larger level in the TLB hierarchy and significantly increases TLB coverage relative to on-chip TLBs. The combination of these two mechanisms, DUCATI, is an address translation architecture that improves GPU performance by 81% (up to 4.5×) while requiring minimal changes to the existing system design. We show that DUCATI is within 20%, 5%, and 2% of the performance of a perfect LLT system when using 4KB, 64KB, and 2MB pages, respectively.
Publisher
Association for Computing Machinery (ACM)
Subject
Hardware and Architecture, Information Systems, Software
Cited by
16 articles.
1. Barre Chord: Efficient Virtual Memory Translation for Multi-Chip-Module GPUs. 2024 ACM/IEEE 51st Annual International Symposium on Computer Architecture (ISCA), 2024-06-29.
2. Direct Memory Translation for Virtualized Clouds. Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2, 2024-04-27.
3. GPU Scale-Model Simulation. 2024 IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2024-03-02.
4. GPU Performance Acceleration via Intra-Group Sharing TLB. Proceedings of the 52nd International Conference on Parallel Processing, 2023-08-07.
5. SnakeByte: A TLB Design with Adaptive and Recursive Page Merging in GPUs. 2023 IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2023-02.