Author:
Kessler R. E., Hill Mark D.
Abstract
When a computer system supports both paged virtual memory and large real-indexed caches, cache performance depends in part on the main memory page placement. To date, most operating systems place pages by selecting an arbitrary page frame from a pool of page frames that have been made available by the page replacement algorithm. We give a simple model that shows that this naive (arbitrary) page placement leads to up to 30% unnecessary cache conflicts. We develop several page placement algorithms, called careful-mapping algorithms, that try to select a page frame (from the pool of available page frames) that is likely to reduce cache contention. Using trace-driven simulation, we find that careful mapping results in 10–20% fewer (dynamic) cache misses than naive mapping (for a direct-mapped real-indexed multimegabyte cache). Thus, our results suggest that careful mapping by the operating system can get about half the cache miss reduction that a cache size (or associativity) doubling can.
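To illustrate the general idea of careful mapping (this is a minimal sketch of a page-coloring style heuristic, not necessarily the exact procedure the paper evaluates; the cache geometry, free-list representation, and function names are assumptions for the example):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical geometry for illustration only (not the paper's configuration). */
#define CACHE_SIZE  (4u * 1024 * 1024)        /* 4 MiB direct-mapped, real-indexed */
#define PAGE_SIZE   (4u * 1024)               /* 4 KiB pages                        */
#define NUM_COLORS  (CACHE_SIZE / PAGE_SIZE)  /* page-sized bins in the cache       */

/* A frame's "color" is the cache bin that pages placed in it will occupy. */
static unsigned frame_color(uint64_t frame_number) {
    return (unsigned)(frame_number % NUM_COLORS);
}

/*
 * Careful mapping, page-coloring style: prefer a free frame whose cache bin
 * matches the bin the virtual page would naturally map to, so that nearby
 * virtual pages tend not to collide in the cache.  If no frame of the
 * desired color is available, fall back to naive (arbitrary) placement.
 * Returns an index into the free-frame pool.
 */
static size_t pick_frame(uint64_t virtual_page_number,
                         const uint64_t *free_frames, size_t num_free) {
    unsigned wanted = (unsigned)(virtual_page_number % NUM_COLORS);
    for (size_t i = 0; i < num_free; i++) {
        if (frame_color(free_frames[i]) == wanted)
            return i;   /* frame in the preferred cache bin */
    }
    return 0;           /* naive fallback: first available frame */
}
```

Other careful-mapping policies fit the same interface; what varies is how the preferred bin is chosen and how ties among free frames are broken.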
Publisher
Association for Computing Machinery (ACM)
Cited by
131 articles.
1. Skip It: Take Control of Your Cache!;Proceedings of the 29th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2;2024-04-27
2. LAG-based schedulability analysis for preemptive global EDF scheduling with dynamic cache allocation;Journal of Systems Architecture;2024-02
3. Brief Industry Paper: Latency-Driven Optimization of Instruction Blocks Orchestration on Memory;2023 IEEE Real-Time Systems Symposium (RTSS);2023-12-05
4. Attack of the Knights: Non Uniform Cache Side Channel Attack;Annual Computer Security Applications Conference;2023-12-04
5. LAG-Based Analysis for Preemptive Global Scheduling with Dynamic Cache Allocation;2023 IEEE 29th International Conference on Embedded and Real-Time Computing Systems and Applications (RTCSA);2023-08-30