Affiliation:
1. Cornell University
2. University of Colorado
Abstract
While caches are effective at avoiding most main-memory accesses, the few remaining memory references are still expensive. Even one cache miss per one hundred accesses can double a program's execution time. To better tolerate the data-cache miss latency, architects have proposed various speculation mechanisms, including load-value prediction. A load-value predictor guesses the result of a load so that the dependent instructions can immediately proceed without having to wait for the memory access to complete. To use the prediction resources most effectively, speculation should be restricted to loads that are likely to miss in the cache and that are likely to be predicted correctly. Prior work has considered hardware- and profile-based methods to make these decisions. Our work focuses on making these decisions at compile time. We show that a simple compiler classification is effective at separating the loads that should be speculated from the loads that should not. We present results for a number of C and Java programs and demonstrate that our results are consistent across programming languages and across program inputs.
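The abstract does not spell out the classification criteria, so the following C sketch only illustrates the general idea of a compile-time filter that marks for speculation the loads likely to miss and likely to be predicted; the category names and the decision rule are assumptions for illustration, not the authors' actual heuristic.

/*
 * Illustrative sketch only: the load categories and the decision rule
 * below are assumed for illustration; the paper's actual compiler
 * classification may differ.
 */
#include <stdbool.h>
#include <stdio.h>

/* Static (compile-time) categories a compiler might assign to a load. */
typedef enum {
    LOAD_STACK,          /* locals and spills: almost always cache hits */
    LOAD_GLOBAL_SCALAR,  /* globals/statics: values often near-constant */
    LOAD_ARRAY_STRIDED,  /* regular array traversal: misses, strided    */
    LOAD_POINTER_CHASE   /* linked-structure loads: values hard to guess */
} LoadClass;

/* Mark a load for value speculation only if it is both likely to miss
 * in the cache and likely to be predicted correctly. */
static bool speculate(LoadClass c) {
    switch (c) {
    case LOAD_STACK:         return false; /* rarely misses: no benefit  */
    case LOAD_GLOBAL_SCALAR: return true;  /* predictable, worth the slot */
    case LOAD_ARRAY_STRIDED: return true;  /* stride-predictable misses   */
    case LOAD_POINTER_CHASE: return false; /* misses, but hard to predict */
    }
    return false;
}

int main(void) {
    printf("speculate on pointer-chasing load? %s\n",
           speculate(LOAD_POINTER_CHASE) ? "yes" : "no");
    return 0;
}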
Publisher
Association for Computing Machinery (ACM)
Subject
Computer Graphics and Computer-Aided Design; Software
Cited by
6 articles.