Authors:
Anthony LaMarca, Richard Ladner
Abstract
As memory access times grow larger relative to processor cycle
times, the cache performance of algorithms has an increasingly
large impact on overall performance. Unfortunately, most commonly
used algorithms were not designed with cache performance in mind.
This paper investigates the cache performance of implicit heaps. We
present optimizations which significantly reduce the cache misses
that heaps incur and improve their overall performance. We present
an analytical model called collective analysis that allows cache
performance to be predicted as a function of both cache
configuration and algorithm configuration. As part of our
investigation, we perform an approximate analysis of the cache
performance of both traditional heaps and our improved heaps in our
model. In addition, empirical data are given for five architectures
to show the impact of our optimizations on overall performance.
We also revisit a priority queue study originally performed by
Jones [25]. Due to the increases in cache miss penalties, the
relative performance results we obtain on today's machines differ
greatly from the machines of only ten years ago. We compare the
performance of implicit heaps, skew heaps and splay trees and
discuss the difference between our results and Jones's.
Publisher
Association for Computing Machinery (ACM)
Subject
Theoretical Computer Science
Cited by
39 articles.