Affiliation:
1. University of Michigan, Ann Arbor, MI, USA
2. Meta, Inc., Menlo Park, CA, USA
Abstract
High-performance flash-based key-value stores in data centers utilize large amounts of DRAM to cache hot data. However, motivated by the high cost and power consumption of DRAM, server designs with a lower DRAM-per-compute ratio are becoming popular. These low-cost servers enable scale-out services by reducing server workload densities. This results in improvements to overall service reliability, leading to a decrease in the total cost of ownership (TCO) for scalable workloads. Nevertheless, for key-value stores with large memory footprints, these reduced-DRAM servers degrade performance due to an increase in both IO utilization and data access latency. In this scenario, a standard practice to improve performance for sharded databases is to reduce the number of shards per machine, which erodes the TCO benefits of reduced-DRAM low-cost servers. In this work, we explore a practical solution to improve performance and reduce the costs and power consumption of key-value stores running on DRAM-constrained servers by using Storage Class Memories (SCM).
SCMs in a DIMM form factor, although slower than DRAM, are sufficiently faster than flash when serving as a large extension to DRAM. With new technologies like Compute Express Link, we can expand the memory capacity of servers with high-bandwidth, low-latency connectivity to SCM. In this article, we use Intel Optane PMem 100 Series SCMs (DCPMM) in App Direct mode to extend the available memory of our existing single-socket platform deployment of RocksDB (one of the largest key-value stores at Meta). We first designed a hybrid cache in RocksDB to harness both DRAM and SCM hierarchically. We then characterized the performance of the hybrid cache for three of the largest RocksDB use cases at Meta (ChatApp, BLOB Metadata, and Hive Cache). Our results demonstrate that we can achieve up to 80% improvement in throughput and 20% improvement in P95 latency over the existing small-DRAM single-socket platform, while maintaining a 43-48% cost improvement over our large-DRAM dual-socket platform. To the best of our knowledge, this is the first study of the DCPMM platform in a commercial data center.
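To make the App Direct usage concrete, the sketch below is a minimal, hypothetical illustration (not the authors' implementation) of how an SCM tier can be built on top of a DAX-mounted PMem namespace using the memkind library listed in reference 2: a block evicted from a DRAM cache tier is copied into an allocation drawn from the PMem-backed kind. The mount path /mnt/pmem0, the block size, and the inline "demotion" are assumptions chosen for illustration.

```cpp
// Minimal sketch (assumptions noted above): allocate cache blocks from an
// fsdax PMem namespace via memkind, so a DRAM tier can demote cold blocks
// into an SCM tier. Build with: g++ demo.cc -lmemkind
#include <memkind.h>

#include <cstdio>
#include <cstdlib>
#include <cstring>

int main() {
    // Create a file-backed PMEM kind on the DAX mount (path is hypothetical).
    // A max size of 0 means the kind is limited only by the capacity of the
    // filesystem mounted at that path.
    memkind_t pmem_kind = nullptr;
    int err = memkind_create_pmem("/mnt/pmem0", 0, &pmem_kind);
    if (err != 0) {
        char msg[MEMKIND_ERROR_MESSAGE_SIZE];
        memkind_error_message(err, msg, sizeof(msg));
        std::fprintf(stderr, "memkind_create_pmem failed: %s\n", msg);
        return 1;
    }

    // "Demote" a 4 KiB block from DRAM to SCM: allocate on the PMEM kind and
    // copy the payload. In a real hybrid cache this would happen on eviction
    // from the DRAM tier rather than inline like this.
    const size_t kBlockSize = 4096;
    void* dram_block = std::malloc(kBlockSize);
    if (dram_block == nullptr) return 1;
    std::memset(dram_block, 0xAB, kBlockSize);

    void* scm_block = memkind_malloc(pmem_kind, kBlockSize);
    if (scm_block != nullptr) {
        std::memcpy(scm_block, dram_block, kBlockSize);
    }

    std::free(dram_block);
    memkind_free(pmem_kind, scm_block);
    memkind_destroy_kind(pmem_kind);
    return 0;
}
```

In a hierarchical DRAM/SCM cache of the kind described in the abstract, an allocation pool like this would back the second (SCM) tier, while the DRAM tier continues to serve the hottest blocks.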
Publisher
Association for Computing Machinery (ACM)
Subject
Hardware and Architecture
References (89 articles)
1. CXL. 2022. Compute express link: The breakthrough CPU-to-device interconnect. Retrieved from https://www.computeexpresslink.org/.
2. Memkind. 2022. memkind library. Retrieved from https://github.com/memkind/memkind.
3. NDCTL. 2022. NDCTL and DAXCTL. Retrieved from https://github.com/pmem/ndctl.
4. NDCTL. 2022. NDCTL user guide: Managing namespaces. Retrieved from https://docs.pmem.io/ndctl-user-guide/managing-namespaces.
5. J. Paul Alcorn. 2019. Intel Optane DIMM pricing. Retrieved from https://www.tomshardware.com/news/intel-optane-dimm-pricing-performance,39007.html.
Cited by (8 articles)
1. DiStore: A Fully Memory Disaggregation Friendly Key-Value Store with Improved Tail Latency and Space Efficiency;Proceedings of the 53rd International Conference on Parallel Processing;2024-08-12
2. Can Modern LLMs Tune and Configure LSM-based Key-Value Stores?;Proceedings of the 16th ACM Workshop on Hot Topics in Storage and File Systems;2024-07-08
3. Can ZNS SSDs be Better Storage Devices for Persistent Cache?;Proceedings of the 16th ACM Workshop on Hot Topics in Storage and File Systems;2024-07-08
4. CaaS-LSM: Compaction-as-a-Service for LSM-based Key-Value Stores in Storage Disaggregated Infrastructure;Proceedings of the ACM on Management of Data;2024-05-29
5. Research on High-Performance Framework of Big Data Acquisition, Storage and Application for Warfare Simulation;2024 IEEE 4th International Conference on Electronic Technology, Communication and Information (ICETCI);2024-05-24