Affiliation:
1. Georgia Institute of Technology
Abstract
Exclusive last-level caches (LLCs) reduce memory accesses by effectively utilizing cache capacity. However, they require excessive on-chip bandwidth to support frequent insertions of cache lines on eviction from upper-level caches. Non-inclusive caches, on the other hand, have the advantage of using the on-chip bandwidth more effectively but suffer from a higher miss rate. Traditionally, the decision to use the cache as exclusive or non-inclusive is made at design time. However, the best option for a cache organization depends on application characteristics, such as working set size and the amount of traffic consumed by LLC insertions.
This paper proposes FLEXclusion, a design that dynamically selects between exclusion and non-inclusion depending on workload behavior. With FLEXclusion, the cache behaves like an exclusive cache when the application benefits from extra cache capacity, and it acts as a non-inclusive cache when additional cache capacity is not useful, so that it can reduce on-chip bandwidth. FLEXclusion leverages the observation that both non-inclusion and exclusion rely on similar hardware support, so our proposal can be implemented with negligible hardware changes. Our evaluations show that a FLEXclusive cache reduces the on-chip LLC insertion traffic by 72.6% compared to an exclusive design and improves performance by 5.9% compared to a non-inclusive design.
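The dynamic selection the abstract describes can be illustrated with a small sketch. This is a hypothetical policy selector, not the paper's actual mechanism: it assumes per-epoch counters for the extra hits that exclusive capacity provides and for the clean-victim insertions an exclusive LLC would perform, and picks the mode whose benefit outweighs its bandwidth cost. All names and the weighting are illustrative assumptions.

```python
# Hypothetical FLEXclusion-style selector (illustrative only, not the
# paper's algorithm). Each epoch, compare the capacity benefit of
# exclusion (extra hits) against its on-chip bandwidth cost (insertions).

EXCLUSIVE, NON_INCLUSIVE = "exclusive", "non-inclusive"

class FlexSelector:
    def __init__(self, epoch=1000, traffic_weight=0.5):
        self.epoch = epoch                    # accesses per decision interval
        self.traffic_weight = traffic_weight  # assumed cost per insertion
        self.mode = EXCLUSIVE
        self._reset()

    def _reset(self):
        self.accesses = 0
        self.extra_hits = 0   # hits only the extra exclusive capacity serves
        self.insertions = 0   # clean-victim insertions exclusion would issue

    def record(self, extra_hit, insertion):
        """Record one LLC access; returns the currently selected mode."""
        self.accesses += 1
        self.extra_hits += extra_hit
        self.insertions += insertion
        if self.accesses >= self.epoch:
            # Act exclusive only when the capacity benefit outweighs the
            # bandwidth spent on insertions; otherwise act non-inclusive.
            benefit = self.extra_hits
            cost = self.traffic_weight * self.insertions
            self.mode = EXCLUSIVE if benefit > cost else NON_INCLUSIVE
            self._reset()
        return self.mode
```

For example, an epoch dominated by insertions with no capacity benefit would flip the selector to non-inclusive operation, saving the insertion bandwidth, which mirrors the trade-off the abstract quantifies.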
Publisher
Association for Computing Machinery (ACM)
Cited by 7 articles.
1. HBPB, applying reuse distance to improve cache efficiency proactively;Journal of Parallel and Distributed Computing;2024-09
2. Exclusive Hierarchies for Predictable Sharing in Last-Level Cache;2024 IEEE 30th Real-Time and Embedded Technology and Applications Symposium (RTAS);2024-05-13
3. Avoiding Unnecessary Caching with History-Based Preemptive Bypassing;2022 IEEE 34th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD);2022-11
4. Reducing Data Movement and Energy in Multilevel Cache Hierarchies without Losing Performance: Can you have it all?;2019 28th International Conference on Parallel Architectures and Compilation Techniques (PACT);2019-09
5. A Survey of End-System Optimizations for High-Speed Networks;ACM Computing Surveys;2019-05-31