Affiliation:
1. Michigan Technological Univ., Houghton
2. Rice Univ., Houston, TX
Abstract
Over the past decade, microprocessor design strategies have focused on increasing the computational power on a single chip. Because computations often require more data from cache per floating-point operation than a machine can deliver and because operations are pipelined, idle computational cycles are common when scientific applications are executed. To overcome these bottlenecks, programmers have learned to use a coding style that ensures a better balance between memory references and floating-point operations. In our view, this is a step in the wrong direction because it makes programs more machine-specific. A programmer should not be required to write a new program version for each new machine; instead, the task of specializing a program to a target machine should be left to the compiler.
But is our view practical? Can a sophisticated optimizing compiler obviate the need for the myriad of programming tricks that have found their way into practice to improve the performance of the memory hierarchy? In this paper we attempt to answer that question. To do so, we develop and evaluate techniques that automatically restructure program loops to achieve high performance on specific target architectures. These methods attempt to balance computation and memory accesses and seek to eliminate or reduce pipeline interlock. To do this, they estimate statically the balance between memory operations and floating-point operations for each loop in a particular program and use these estimates to determine whether to apply various loop transformations.
Experiments with our automatic techniques show that integer-factor speedups are possible on kernels. Additionally, the estimate of the balance between memory operations and computation, and the application of that estimate, are quite accurate: experiments reveal little difference between the balance achieved by our automatic system and that achieved by hand optimization.
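The static estimate the abstract describes can be illustrated with a minimal sketch. The idea (under the usual definitions from the compiler literature, not code from the paper itself) is to compare a loop's *balance*, the memory references it issues per floating-point operation, against the *machine balance* the hardware can sustain; a loop whose balance exceeds the machine's is memory-bound and is a candidate for transformations such as unroll-and-jam with scalar replacement. All function names and the machine-balance figure below are illustrative assumptions.

```python
# Minimal sketch of static loop-balance estimation, assuming the standard
# definitions: loop balance = memory refs per flop, machine balance = the
# ratio of memory bandwidth to peak flop rate the hardware sustains.

def loop_balance(mem_refs_per_iter: int, flops_per_iter: int) -> float:
    """Memory references issued per floating-point operation in one iteration."""
    return mem_refs_per_iter / flops_per_iter

def is_memory_bound(loop_bal: float, machine_bal: float) -> bool:
    """True when the loop demands more memory traffic than the machine delivers,
    i.e. when idle computational cycles are expected."""
    return loop_bal > machine_bal

# Example: a dot-product body `s += a[i] * b[i]` issues 2 loads and 2 flops
# per iteration (the accumulator stays in a register).
bal = loop_balance(mem_refs_per_iter=2, flops_per_iter=2)   # 1.0
# Hypothetical machine sustaining one memory reference per two flops.
print(is_memory_bound(bal, machine_bal=0.5))  # True: transformation worthwhile
```

A compiler would derive the per-iteration counts from dependence and reuse analysis rather than by hand, and would apply transformations only when they move the loop balance toward the machine balance.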
Publisher
Association for Computing Machinery (ACM)
References (24 articles)
1. Allen, F. 1972. A catalogue of optimizing transformations. In Design and Optimization of Compilers. Prentice-Hall, Englewood Cliffs, N.J.
2. Automatic translation of FORTRAN programs to vector form
3. Briggs, P. 1992. Register allocation via graph coloring. Ph.D. thesis, Dept. of Computer Science, Rice Univ., Houston, Tex.
4. Estimating interlock and improving balance for pipelined architectures
Cited by (70 articles)
1. Register Blocking: An Analytical Modelling Approach for Affine Loop Kernels;Proceedings of the 21st ACM International Conference on Computing Frontiers;2024-05-07
2. Register Tiling for Unstructured Sparsity in Neural Network Inference;Proceedings of the ACM on Programming Languages;2023-06-06
3. MD-Roofline: A Training Performance Analysis Model for Distributed Deep Learning;2022 IEEE Symposium on Computers and Communications (ISCC);2022-06-30
4. Uniform lease vs. LRU cache: analysis and evaluation;Proceedings of the 2021 ACM SIGPLAN International Symposium on Memory Management;2021-06-22
5. Vectorization-aware loop unrolling with seed forwarding;Proceedings of the 29th International Conference on Compiler Construction;2020-02-22