Affiliation:
1. MIT CSAIL, Cambridge, MA
2. Carnegie Mellon University, Pittsburgh, PA
Abstract
Instruction-cache misses account for up to 40% of execution time in online transaction processing (OLTP) database workloads. Unlike data-cache misses, instruction misses cannot be overlapped with out-of-order execution, and chip design constraints prevent the increases in instruction-cache size or associativity that would reduce them. On the contrary, the effective instruction-cache size is expected to shrink further with the adoption of multicore and multithreaded chip designs (multiple on-chip processor cores and multiple simultaneous threads per core). Concurrent database threads, however, execute similar instruction sequences over their lifetimes, sequences too long to be captured and exploited in hardware. The challenge, from a software designer's point of view, is to identify and exploit common code paths across threads executing arbitrary operations, thereby eliminating extraneous instruction misses.
In this article, we describe Synchronized Threads through Explicit Processor Scheduling (STEPS), a methodology and tool for increasing instruction locality in database servers executing transaction-processing workloads. STEPS works at two levels to increase the reusability of instructions brought into the cache. At the higher level, synchronization barriers form teams of threads that execute the same system component. Within a team, STEPS schedules special fast context-switches at very fine granularity so that sets of instructions are reused across team members. To find the points in the code where context-switches should occur, we develop autoSTEPS, a code-profiling tool that runs directly on the DBMS binary. STEPS can minimize both capacity and conflict instruction-cache misses for arbitrarily long code paths.
We demonstrate the effectiveness of our approach on Shore, a research prototype database system shown to exhibit the same bottlenecks as commercial systems. Using microbenchmarks on real and simulated processors, we observe that STEPS eliminates up to 96% of instruction-cache misses for each additional team thread and, at the same time, up to 64% of mispredicted branches by presenting a repetitive execution pattern to the processor. In a full-system evaluation on real hardware using TPC-C, the industry-standard transactional benchmark, STEPS eliminates two-thirds of instruction-cache misses and delivers up to a 1.4x overall speedup.
Publisher
Association for Computing Machinery (ACM)
Cited by 11 articles:
1. BriskStream. Proceedings of the 2019 International Conference on Management of Data, 2019-06-25.
2. Characterizing Resource Sensitivity of Database Workloads. 2018 IEEE International Symposium on High Performance Computer Architecture (HPCA), 2018-02.
3. Characterizing OS Behaviors of Datacenter and Big Data Workloads. 2016.
4. STREX. ACM SIGARCH Computer Architecture News, 2013-06-26.
5. STREX. Proceedings of the 40th Annual International Symposium on Computer Architecture, 2013-06-23.