Affiliation:
1. University of Utah, Salt Lake City, UT, USA
Abstract
Power consumption and DRAM latencies are serious concerns in modern chip-multiprocessor (CMP, or multi-core) based compute systems. The management of the DRAM row buffer can significantly impact both power consumption and latency. Modern DRAM systems read data from the cell arrays and populate a row buffer as large as 8 KB on a memory request, but only a small fraction of these bits is ever returned to the CPU. This wastes energy and time reading (and subsequently writing back) bits that are rarely used. Traditionally, an open-page policy has been used in uni-processor systems, and it has worked well because of the spatial and temporal locality in the access stream. In future multi-core processors, the potentially independent access streams of the cores are interleaved, destroying the available locality and significantly under-utilizing the contents of the row buffer. In this work, we attempt to improve row-buffer utilization for future multi-core systems.
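To make the under-utilization concrete, the following is a minimal sketch in C (not taken from the paper): it models a single DRAM bank under an open-page policy and measures row-buffer reuse when several sequential per-core streams reach the bank interleaved. The row size (8 KB) and cache-line size (64 B) follow the abstract; the core count, round-robin interleaving, and address spacing are illustrative assumptions.

    /* Minimal sketch (not from the paper): a single DRAM bank with an
       open-page policy, showing how interleaving per-core streams hurts
       row-buffer reuse.  Row size (8 KB) and line size (64 B) follow the
       abstract; the core count and access pattern are assumptions. */
    #include <stdio.h>
    #include <stdint.h>

    #define ROW_BYTES   8192u    /* row-buffer size quoted in the abstract */
    #define LINE_BYTES  64u      /* typical cache-line fill granularity    */
    #define NUM_CORES   4u       /* assumed number of interleaved streams  */
    #define ACCESSES    10000u

    int main(void)
    {
        uint64_t next_addr[NUM_CORES];
        uint64_t open_row = UINT64_MAX;          /* no row open initially */
        unsigned hits = 0, misses = 0;

        /* Each core walks its own region sequentially (good locality per
           core), but requests reach the bank round-robin interleaved. */
        for (unsigned c = 0; c < NUM_CORES; c++)
            next_addr[c] = (uint64_t)c << 20;    /* streams 1 MB apart (assumption) */

        for (unsigned i = 0; i < ACCESSES; i++) {
            unsigned c = i % NUM_CORES;
            uint64_t row = next_addr[c] / ROW_BYTES;
            if (row == open_row) hits++;
            else { misses++; open_row = row; }
            next_addr[c] += LINE_BYTES;          /* sequential within a core */
        }

        printf("row-buffer hit rate: %.1f%%\n", 100.0 * hits / ACCESSES);
        printf("fraction of each activated 8 KB row actually read: %.2f%%\n",
               100.0 * (double)ACCESSES * LINE_BYTES / ((double)misses * ROW_BYTES));
        return 0;
    }

Setting NUM_CORES to 1 in this toy model recovers the high hit rate and near-complete row utilization of the uni-processor open-page case described above.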
The schemes presented here are motivated by our observation that a large number of accesses within heavily accessed OS pages are to small, contiguous "chunks" of cache blocks. Thus, co-locating chunks from different OS pages in a row buffer improves overall utilization of the row-buffer contents and, consequently, reduces memory energy consumption and access time. Such co-location can be achieved in several ways, notably by reducing the OS page size and by software- or hardware-assisted migration of data within DRAM. We explore these mechanisms and discuss the trade-offs involved, along with the energy and performance improvements each scheme delivers. On average, for applications with room for improvement, our best-performing scheme increases performance by 9% (max. 18%) and reduces memory energy consumption by 15% (max. 70%).
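As a rough illustration of the co-location idea, here is a sketch under assumptions, not the paper's actual mechanism: a small software indirection table remaps hot 1 KB "chunks" from different OS pages into one reserved 8 KB DRAM row. The chunk size, reserved region, and the colocate()/remap_addr() helpers are hypothetical.

    /* Sketch only: a toy indirection table that migrates hot 1 KB "chunks"
       of different OS pages into one reserved 8 KB DRAM row, so their
       accesses share a single row activation.  Chunk size, the reserved
       region, and both helpers are illustrative assumptions. */
    #include <stdio.h>
    #include <stdint.h>

    #define ROW_BYTES    8192u
    #define CHUNK_BYTES  1024u        /* assumed hot-chunk granularity */
    #define MAX_HOT      8u           /* 8 chunks fill one 8 KB row    */

    struct remap_entry {
        uint64_t old_chunk;           /* original chunk number (addr / CHUNK_BYTES) */
        uint64_t new_chunk;           /* slot inside the reserved hot row           */
    };

    static struct remap_entry table[MAX_HOT];
    static unsigned table_len = 0;
    /* Base of a reserved DRAM region used for co-location (assumption). */
    static const uint64_t reserved_base_chunk = 0x40000000u / CHUNK_BYTES;

    /* Migrate a hot chunk into the next free slot of the reserved row. */
    static int colocate(uint64_t old_chunk)
    {
        if (table_len >= MAX_HOT) return -1;        /* reserved row is full */
        table[table_len].old_chunk = old_chunk;
        table[table_len].new_chunk = reserved_base_chunk + table_len;
        table_len++;
        return 0;
    }

    /* Translate an address, honoring any recorded migration. */
    static uint64_t remap_addr(uint64_t addr)
    {
        uint64_t chunk = addr / CHUNK_BYTES, off = addr % CHUNK_BYTES;
        for (unsigned i = 0; i < table_len; i++)
            if (table[i].old_chunk == chunk)
                return table[i].new_chunk * CHUNK_BYTES + off;
        return addr;                                /* not hot: unchanged */
    }

    int main(void)
    {
        /* Hot chunks from two different OS pages (addresses are made up). */
        uint64_t a = 0x00120000u, b = 0x07450400u;
        colocate(a / CHUNK_BYTES);
        colocate(b / CHUNK_BYTES);

        /* After migration, both accesses land in the same DRAM row. */
        printf("a -> row %llu\n", (unsigned long long)(remap_addr(a) / ROW_BYTES));
        printf("b -> row %llu\n", (unsigned long long)(remap_addr(b) / ROW_BYTES));
        return 0;
    }

A hardware variant would keep such an indirection in the memory controller, while a software variant would rely on reduced OS page sizes; these are the two broad directions the abstract mentions.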
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design, Software
Cited by
18 articles.
1. Flatfish: A Reinforcement Learning Approach for Application-Aware Address Mapping;IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems;2022-11
2. Hybrid Refresh: Improving DRAM Performance by Handling Weak Rows Smartly;Proceedings of the 2022 International Symposium on Memory Systems;2022-10-03
3. Software-defined address mapping: a case on 3D memory;Proceedings of the 27th ACM International Conference on Architectural Support for Programming Languages and Operating Systems;2022-02-22
4. FIGARO: Improving System Performance via Fine-Grained In-DRAM Data Relocation and Caching;2020 53rd Annual IEEE/ACM International Symposium on Microarchitecture (MICRO);2020-10
5. Self-Adaptive Address Mapping Mechanism for Access Pattern Awareness on DRAM;2019 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Big Data & Cloud Computing, Sustainable Computing & Communications, Social Computing & Networking (ISPA/BDCloud/SocialCom/SustainCom);2019-12