Affiliation:
1. EcoCloud, EPFL
2. Huawei
3. University of Edinburgh
4. FORTH-ICS & ECE-TUC
Abstract
With mainstream technologies to couple logic tightly with memory on the horizon, near-memory processing has re-emerged as a promising approach to improving performance and energy efficiency for data-centric computing. DRAM, however, is primarily designed for density and low cost, with a rigid internal organization that favors coarse-grain streaming rather than byte-level random access. This paper makes the case that treating DRAM as a block-oriented streaming device yields significant efficiency and performance benefits, which motivates algorithm/architecture co-design that favors streaming access patterns, even at the price of higher algorithmic complexity. We present the Mondrian Data Engine, which drastically improves the runtime and energy efficiency of basic in-memory analytic operators, despite doing more work than traditional CPU-optimized algorithms, which rely heavily on random accesses and deep cache hierarchies.
Publisher
Association for Computing Machinery (ACM)
Cited by
4 articles.
1. Improved Computation of Database Operators via Vector Processing Near-Data; 2023 IEEE 35th International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD); 2023-10-17
2. A survey on processing-in-memory techniques: Advances and challenges; Memories - Materials, Devices, Circuits and Systems; 2023-07
3. Advancing Database System Operators with Near-Data Processing; 2022 30th Euromicro International Conference on Parallel, Distributed and Network-based Processing (PDP); 2022-03
4. Database processing-in-memory; Proceedings of the VLDB Endowment; 2019-11