Unified Holistic Memory Management Supporting Multiple Big Data Processing Frameworks over Hybrid Memories

Author:

Lei Chen¹, Jiacheng Zhao¹, Chenxi Wang², Ting Cao³, John Zigman⁴, Haris Volos⁵, Onur Mutlu⁶, Fang Lv¹, Xiaobing Feng¹, Guoqing Harry Xu², Huimin Cui¹

Affiliation:

1. State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China, and University of Chinese Academy of Sciences, Beijing, China

2. University of California, Los Angeles, California, USA

3. Microsoft Research, China

4. The University of Sydney, Australia

5. University of Cyprus, Cyprus

6. ETH Zürich, Switzerland

Abstract

To process real-world datasets, modern data-parallel systems often require extremely large amounts of memory, which are both costly and energy-inefficient. Emerging non-volatile memory (NVM) technologies offer higher capacity than DRAM and lower energy than SSDs. Hence, NVMs have the potential to fundamentally change the dichotomy between DRAM and durable storage in Big Data processing. However, most Big Data applications are written in managed languages and executed on top of a managed runtime that already performs various dimensions of memory management. Supporting hybrid physical memories adds a new dimension, creating unique challenges in data replacement. This article proposes Panthera, a semantics-aware, fully automated memory management technique for Big Data processing over hybrid memories. Panthera analyzes user programs on a Big Data system to infer their coarse-grained access patterns, which are then passed to the Panthera runtime for efficient data placement and migration. For Big Data applications, this coarse-grained data-division information is accurate enough to guide the GC's data layout decisions, incurring little overhead for data monitoring and movement. We implemented Panthera in OpenJDK and Apache Spark. Based on the memory access patterns of Big Data applications, we also implemented a new profiling-guided optimization strategy that is transparent to applications. With this optimization, our extensive evaluation demonstrates that Panthera reduces energy by 32–53% with less than 1% time overhead on average. To show Panthera's applicability, we extended it to QuickCached, a pure Java implementation of Memcached. Our evaluation results show that Panthera reduces energy by 28.7% with a 5.2% time overhead on average.
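The abstract's core idea is that placement decisions are made at the granularity of whole datasets (e.g., Spark RDDs) rather than individual objects: data inferred to be frequently accessed stays in DRAM, while large, rarely accessed data is placed in or migrated to NVM. The following is a minimal, hypothetical sketch of such a coarse-grained placement policy in plain Java, included only to make the idea concrete. The names (HybridHeapSketch, Dataset, DatasetHint, allocate, migrateIfCold) are assumptions for exposition and are not Panthera's actual API; the real system makes these decisions inside the JVM garbage collector, not at the application level.

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Toy model of coarse-grained placement over a hybrid DRAM/NVM heap.
 * Hypothetical illustration only; not Panthera's implementation.
 */
public class HybridHeapSketch {

    /** Coarse-grained access hint inferred from the user program. */
    enum DatasetHint { HOT, COLD }   // e.g., frequently vs. rarely accessed dataset

    /** One placement record per dataset (per RDD), not per object. */
    static final class Dataset {
        final String name;
        final long bytes;
        DatasetHint hint;
        String placement;            // "DRAM" or "NVM"

        Dataset(String name, long bytes, DatasetHint hint) {
            this.name = name;
            this.bytes = bytes;
            this.hint = hint;
        }
    }

    private final long dramCapacity;
    private long dramUsed;
    private final List<Dataset> datasets = new ArrayList<>();

    HybridHeapSketch(long dramCapacity) {
        this.dramCapacity = dramCapacity;
    }

    /** Place a whole dataset: hot data prefers DRAM, cold data goes to NVM. */
    void allocate(Dataset d) {
        if (d.hint == DatasetHint.HOT && dramUsed + d.bytes <= dramCapacity) {
            d.placement = "DRAM";
            dramUsed += d.bytes;
        } else {
            d.placement = "NVM";     // higher capacity, lower energy than DRAM
        }
        datasets.add(d);
    }

    /** Re-place a dataset whose hint has turned cold (e.g., after its last use). */
    void migrateIfCold(Dataset d) {
        if (d.hint == DatasetHint.COLD && "DRAM".equals(d.placement)) {
            d.placement = "NVM";
            dramUsed -= d.bytes;
        }
    }

    public static void main(String[] args) {
        HybridHeapSketch heap = new HybridHeapSketch(4L << 30);           // 4 GiB of DRAM
        Dataset edges = new Dataset("edgesRDD", 3L << 30, DatasetHint.HOT);
        Dataset checkpoints = new Dataset("checkpointRDD", 8L << 30, DatasetHint.COLD);

        heap.allocate(edges);        // hot and fits: stays in DRAM
        heap.allocate(checkpoints);  // cold: placed directly in NVM

        edges.hint = DatasetHint.COLD;   // inferred: no further accesses after this stage
        heap.migrateIfCold(edges);       // migrated to NVM during a later GC in the real system

        for (Dataset d : heap.datasets) {
            System.out.println(d.name + " -> " + d.placement);
        }
    }
}
```

Because the decision is per dataset rather than per object, the runtime never has to monitor or move individual objects one at a time, which is why the abstract reports little overhead for data monitoring and movement.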

Funder

National Natural Science Foundation of China

US National Science Foundation

Office of Naval Research (ONR)

Publisher

Association for Computing Machinery (ACM)

Subject

General Computer Science


Cited by 4 articles.

1. L/STIM: A Framework for Detecting Multi-Stage Cyber Attacks. 2024 International Russian Smart Industry Conference (SmartIndustryCon), 2024-03-25.

2. Reinvent Cloud Software Stacks for Resource Disaggregation. Journal of Computer Science and Technology, 2023-09.

3. Challenges and future directions for energy, latency, and lifetime improvements in NVMs. Distributed and Parallel Databases, 2022-09-21.

4. Performance Evaluation Analysis of Spark Streaming Backpressure for Data-Intensive Pipelines. Sensors, 2022-06-23.
