Micro-architectural analysis of in-memory OLTP: Revisited

Authors:

Utku Sirin, Pınar Tözün, Danica Porobic, Ahmad Yasin, Anastasia Ailamaki

Abstract

Micro-architectural behavior of traditional disk-based online transaction processing (OLTP) systems has been investigated extensively over the past couple of decades. Results show that traditional OLTP systems mostly under-utilize the available micro-architectural resources. In-memory OLTP systems, on the other hand, process all the data in main memory and can therefore omit the buffer pool. Furthermore, they usually adopt more lightweight concurrency control mechanisms, cache-conscious data structures, and cleaner codebases, since they are usually designed from scratch. Hence, we expect significant differences in micro-architectural behavior when running OLTP on platforms optimized for in-memory processing as opposed to disk-based database systems. In particular, we expect in-memory systems to exploit micro-architectural features such as instruction and data caches significantly better than disk-based systems. This paper sheds light on the micro-architectural behavior of in-memory database systems by analyzing it and contrasting it to the behavior of disk-based systems when running OLTP workloads. The results show that, despite all the design changes, in-memory OLTP exhibits very similar micro-architectural behavior to disk-based OLTP: more than half of the execution time goes to memory stalls, where instruction cache misses or long-latency data misses from the last-level cache (LLC) are the dominant factors in the overall execution time. Even though ground-up designed in-memory systems can eliminate the instruction cache misses, the reduction in instruction stalls amplifies the impact of LLC data misses. As a result, only 30% of the CPU cycles are used to retire instructions, and 70% of the CPU cycles are wasted on stalls for both traditional disk-based and new-generation in-memory OLTP.
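
Cycle breakdowns of the kind quoted above (30% retiring, 70% stalls) are typically derived from hardware performance counters using a Top-Down-style analysis. The sketch below is a rough illustration of the top-level Top-Down split computed from raw counter values; the Python helper, the Intel event names, and the 4-wide pipeline width are assumptions for a Skylake-class core collected beforehand (e.g., with perf stat), and are not taken from the paper itself.

```python
# Minimal sketch of a top-level Top-Down (TMA) cycle breakdown.
# Assumptions: a 4-wide Skylake-class Intel core and counter values
# already collected (e.g., via `perf stat`) while running the OLTP workload.

SLOTS_PER_CYCLE = 4  # assumed issue width of the measured core

def topdown_level1(counters: dict) -> dict:
    """Split pipeline slots into Retiring / Bad Speculation / Frontend / Backend Bound."""
    slots = SLOTS_PER_CYCLE * counters["cpu_clk_unhalted.thread"]
    frontend_bound = counters["idq_uops_not_delivered.core"] / slots
    bad_speculation = (counters["uops_issued.any"]
                       - counters["uops_retired.retire_slots"]
                       + SLOTS_PER_CYCLE * counters["int_misc.recovery_cycles"]) / slots
    retiring = counters["uops_retired.retire_slots"] / slots
    backend_bound = 1.0 - frontend_bound - bad_speculation - retiring
    return {
        "retiring": retiring,
        "bad_speculation": bad_speculation,
        "frontend_bound": frontend_bound,   # inflated by instruction cache misses
        "backend_bound": backend_bound,     # inflated by long-latency LLC data misses
    }

# Hypothetical counter values, for illustration only.
example = {
    "cpu_clk_unhalted.thread": 1_000_000_000,
    "idq_uops_not_delivered.core": 800_000_000,
    "uops_issued.any": 1_400_000_000,
    "uops_retired.retire_slots": 1_200_000_000,
    "int_misc.recovery_cycles": 20_000_000,
}
print(topdown_level1(example))
```

On such a breakdown, the paper's observation would surface as a large frontend-bound share (instruction cache misses) for disk-based OLTP and a large backend-bound share (LLC data misses) for in-memory OLTP, with retiring stuck around 30% in both cases.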

Funder

EPFL Lausanne

Publisher

Springer Science and Business Media LLC

Subject

Hardware and Architecture, Information Systems

Cited by 3 articles.

1. Performance or Efficiency? A Tale of Two Cores for DB Workloads. Proceedings of the 20th International Workshop on Data Management on New Hardware, 2024-06-09

2. Microarchitectural Analysis of Graph BI Queries on RDBMS. Proceedings of the 19th International Workshop on Data Management on New Hardware, 2023-06-18

3. HybriDS. Proceedings of the 34th ACM Symposium on Parallelism in Algorithms and Architectures, 2022-07-11
