Affiliation
1. Northeastern University, Department of Electrical and Computer Engineering, Boston, MA
Abstract
Cache memories are commonly used to bridge the gap between processor and memory speed. Caches provide fast access to a subset of memory. When a memory request does not find an address in the cache, a cache miss is incurred. In most commercial processors today, whenever a data cache read miss occurs, the processor stalls until the outstanding miss is serviced, which can severely degrade overall system performance. To remedy this situation, non-blocking (lockup-free) caches can be employed. A non-blocking cache allows the processor to continue to perform useful work even in the presence of cache misses. This paper summarizes past work on lockup-free caches, describing the four main design choices that have been proposed. A summary of the performance reported in these past studies is presented, followed by a discussion of the potential speedup a processor can obtain when using lockup-free caches.
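The bookkeeping structure classically used to build such a cache is the miss status holding register (MSHR) file introduced by Kroft. As a rough illustration of the idea, the C sketch below models a tiny MSHR table: a primary miss allocates an entry and the processor runs on; a secondary miss to a block that is already being fetched is merged into the existing entry instead of triggering a second fetch; the processor stalls only when the table (or an entry's target list) is full. The table sizes, the `target` representation, and all function names are illustrative assumptions, not details taken from the paper.

```c
/*
 * Minimal sketch of an MSHR file for a non-blocking (lockup-free)
 * cache. Sizes and names are illustrative assumptions only.
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_MSHRS   4  /* outstanding primary misses supported */
#define MAX_TARGETS 4  /* merged (secondary) misses per block */

typedef struct {
    bool     valid;                /* entry tracks an in-flight miss */
    uint64_t block_addr;           /* memory block being fetched */
    int      num_targets;          /* requests waiting on this block */
    uint64_t targets[MAX_TARGETS]; /* e.g. destination registers */
} Mshr;

static Mshr mshrs[NUM_MSHRS];

/* Called on a cache miss. Returns false only when the processor must
 * stall: every MSHR is busy, or too many requests already target the
 * same block. Otherwise the processor keeps executing past the miss. */
static bool handle_miss(uint64_t block_addr, uint64_t target)
{
    int free_slot = -1;
    for (int i = 0; i < NUM_MSHRS; i++) {
        if (mshrs[i].valid && mshrs[i].block_addr == block_addr) {
            /* Secondary miss: the block is already being fetched, so
             * just record another waiting target. */
            if (mshrs[i].num_targets == MAX_TARGETS)
                return false;
            mshrs[i].targets[mshrs[i].num_targets++] = target;
            return true;
        }
        if (!mshrs[i].valid && free_slot < 0)
            free_slot = i;
    }
    if (free_slot < 0)
        return false; /* all MSHRs in use: stall until one retires */

    /* Primary miss: allocate an MSHR and (conceptually) send the
     * fetch request to memory. */
    mshrs[free_slot] = (Mshr){ .valid = true,
                               .block_addr = block_addr,
                               .num_targets = 1,
                               .targets = { target } };
    return true;
}

/* Called when the memory system returns a block: wake every waiting
 * target and free the MSHR entry for reuse. */
static void miss_complete(uint64_t block_addr)
{
    for (int i = 0; i < NUM_MSHRS; i++) {
        if (mshrs[i].valid && mshrs[i].block_addr == block_addr) {
            for (int t = 0; t < mshrs[i].num_targets; t++)
                printf("deliver block %llx to target %llx\n",
                       (unsigned long long)block_addr,
                       (unsigned long long)mshrs[i].targets[t]);
            mshrs[i].valid = false;
            return;
        }
    }
}

int main(void)
{
    handle_miss(0x1000, 1); /* primary miss: processor keeps going */
    handle_miss(0x1000, 2); /* secondary miss: merged, no new fetch */
    handle_miss(0x2000, 3); /* independent primary miss */
    miss_complete(0x1000);  /* memory returns the first block */
    return 0;
}
```

The design choices the paper surveys revolve around exactly this structure, e.g. how many misses may be outstanding and how many secondary misses can be merged before the processor must stall.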
Publisher
Association for Computing Machinery (ACM)
Cited by
8 articles.
1. Caching with Delayed Hits; Proceedings of the Annual Conference of the ACM Special Interest Group on Data Communication on the Applications, Technologies, Architectures, and Protocols for Computer Communication; 2020-07-30
2. Directed Statistical Warming through Time Traveling; Proceedings of the 52nd Annual IEEE/ACM International Symposium on Microarchitecture; 2019-10-12
3. Recycling Trash in Cache; ACM SIGPLAN Notices; 2016-01-28
4. Recycling Trash in Cache; Proceedings of the 2015 International Symposium on Memory Management; 2015-06-14
5. Hybrid Analytical Modeling of Pending Cache Hits, Data Prefetching, and MSHRs; ACM Transactions on Architecture and Code Optimization; 2011-10