Affiliation:
1. Microsoft Research, Redmond, WA
Abstract
LLAMA is a subsystem designed for new hardware environments that supports an API for page-oriented access methods, providing both cache and storage management. The caching (CL) and storage (SL) layers share a common mapping table that separates a page's logical location from its physical location. CL supports data updates and management updates (e.g., for index re-organization) via latch-free compare-and-swap atomic state changes on the mapping table. SL uses the same mapping table to cope with the page location changes produced by log structuring on every page flush. To demonstrate LLAMA's suitability, we tailored our latch-free Bw-tree implementation to use LLAMA. The Bw-tree is a B-tree style index. Layered on LLAMA, it achieves higher performance and scalability on real workloads than BerkeleyDB's B-tree, which is known for good performance.
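The abstract's central mechanism is the latch-free update path: each page is identified by a logical page ID that indexes a mapping-table slot holding the page's current physical address, and an update prepends a delta record installed with a single compare-and-swap on that slot. The sketch below illustrates this idea under stated assumptions; the names (`Delta`, `mapping_table`, `update_page`) are hypothetical and not LLAMA's actual API.

```cpp
#include <atomic>
#include <cstddef>

// Illustrative sketch of a latch-free, delta-based mapping-table update.
// A page ID indexes an atomic slot holding the head of the page's delta
// chain (its current in-memory state). An update allocates a delta,
// links it to the current head, and installs it with compare-and-swap;
// on CAS failure (another thread won), it re-links and retries.
// All identifiers here are assumptions for illustration.

struct Delta {
    int key;        // payload of this update (illustrative)
    Delta* next;    // previous page state (rest of the delta chain)
};

constexpr std::size_t kTableSize = 1024;
std::atomic<Delta*> mapping_table[kTableSize];  // logical -> physical

// Prepend delta `d` to page `pid` without taking any latch.
void update_page(std::size_t pid, Delta* d) {
    Delta* old_head = mapping_table[pid].load(std::memory_order_acquire);
    do {
        d->next = old_head;  // link new delta to the state we observed
    } while (!mapping_table[pid].compare_exchange_weak(
        old_head, d,
        std::memory_order_release,    // publish the new delta on success
        std::memory_order_acquire));  // reload the head on failure
}
```

Because the only shared write is the single CAS on the mapping-table slot, concurrent updaters never block one another; a loser simply re-links its delta against the new head and retries. The same indirection is what lets the storage layer relocate a page on every log-structured flush by swapping just the slot's contents.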
Cited by
40 articles.
1. LETUS: A Log-Structured Efficient Trusted Universal BlockChain Storage;Companion of the 2024 International Conference on Management of Data;2024-06-09
2. Bwe-tree: An Evolution of Bw-tree on Fast Storage;2024 IEEE 40th International Conference on Data Engineering (ICDE);2024-05-13
3. IndeXY: A Framework for Constructing Indexes Larger than Memory;2024 IEEE 40th International Conference on Data Engineering (ICDE);2024-05-13
4. Optimizing Data Retrieval from Secondary Storage with a Proactive Intermediate Cache;SoutheastCon 2024;2024-03-15
5. The Design and Implementation of UniKV for Mixed Key-Value Storage Workloads;IEEE Transactions on Knowledge and Data Engineering;2023-11-01