Affiliation:
1. Wuhan National Lab for Optoelectronics and School of Computer, Huazhong University of Science and Technology, Wuhan, China
2. Data Storage Institute, A*STAR (Agency for Science, Technology and Research), Innovis, Singapore
Abstract
For years, the increasing popularity of flash memory has been changing storage systems. Flash-based solid-state drives (SSDs) are widely used as a new cache tier on top of hard disk drives (HDDs) to speed up data-intensive applications. However, the endurance problem of flash memory remains a concern and is getting worse with the adoption of MLC and TLC flash. In this article, we propose a novel cache management algorithm for flash-based disk cache named Lazy Adaptive Replacement Cache (LARC). LARC adopts the idea of selective caching to filter out seldom accessed blocks and prevent them from entering the cache. This avoids cache pollution and preserves popular blocks in cache for a longer period of time, leading to a higher hit rate. Meanwhile, by avoiding unnecessary cache replacements, LARC reduces the volume of data written to the SSD and yields an SSD-friendly access pattern. In this way, LARC improves the performance and endurance of the SSD at the same time. LARC is self-tuning and incurs little overhead. It has been extensively evaluated by both trace-driven simulations and synthetic benchmarks on a prototype implementation. Our experiments show that LARC outperforms state-of-the-art algorithms for different kinds of workloads and extends SSD lifetime by up to 15.7 times.
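The selective-caching idea described in the abstract can be illustrated with a minimal sketch: a block is admitted to the SSD cache only on a repeated access observed through a ghost filter queue, so one-time accesses never trigger an SSD write. The class name, fixed queue sizes, and plain-LRU eviction below are illustrative assumptions, not the paper's exact adaptive scheme.

```python
from collections import OrderedDict

class LazyCache:
    """Sketch of lazy (two-hit) admission with a ghost filter queue.

    A block enters the real cache only if its id was recently seen in
    the ghost queue; the ghost queue stores ids only, so filtering a
    cold block costs no SSD write. Sizes are fixed here, whereas the
    paper tunes the filter adaptively.
    """

    def __init__(self, cache_size, ghost_size):
        self.cache = OrderedDict()   # block id -> data (oldest first)
        self.ghost = OrderedDict()   # block ids only, no data
        self.cache_size = cache_size
        self.ghost_size = ghost_size

    def access(self, block):
        """Return True on a cache hit, False on a miss."""
        if block in self.cache:          # hit: refresh recency
            self.cache.move_to_end(block)
            return True
        if block in self.ghost:          # second recent access: admit
            del self.ghost[block]
            if len(self.cache) >= self.cache_size:
                self.cache.popitem(last=False)  # evict LRU block
            self.cache[block] = None     # data payload elided in sketch
            return False
        # first access: remember the id only, no cache replacement
        self.ghost[block] = None
        if len(self.ghost) > self.ghost_size:
            self.ghost.popitem(last=False)
        return False
```

Under this policy a block that is touched once and never again occupies only a ghost-queue slot, which is how cache pollution and the associated SSD writes are avoided.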
Funder
Agency for Science, Technology and Research (A*STAR), Singapore
National Basic Research Program of China
National High Technology Research and Development Program ("863" Program) of China
Natural Science Foundation of Hubei Province
National Natural Science Foundation of China
Publisher
Association for Computing Machinery (ACM)
Subject
Hardware and Architecture
Cited by
54 articles.
1. BERT4Cache: a bidirectional encoder representations for data prefetching in cache;PeerJ Computer Science;2024-08-29
2. OptimusPrime: Unleash Dataplane Programmability through a Transformable Architecture;Proceedings of the ACM SIGCOMM 2024 Conference;2024-08-04
3. Challenges in Multi-tenant Web Cache Management;Anais Estendidos do XLII Simpósio Brasileiro de Redes de Computadores e Sistemas Distribuídos (SBRC 2024);2024-05-20
4. No Clash on Cache: Observations from a Multi-tenant Ecommerce Platform;Proceedings of the 15th ACM/SPEC International Conference on Performance Engineering;2024-05-07
5. SLAP: Segmented Reuse-Time-Label Based Admission Policy for Content Delivery Network Caching;ACM Transactions on Architecture and Code Optimization;2024-03-23