Improving the ratio of memory operations to floating-point operations in loops

Author:

Steve Carr (1), Ken Kennedy (2)

Affiliation:

1. Michigan Technological Univ., Houghton, MI

2. Rice Univ., Houston, TX

Abstract

Over the past decade, microprocessor design strategies have focused on increasing the computational power on a single chip. Because computations often require more data from cache per floating-point operation than a machine can deliver and because operations are pipelined, idle computational cycles are common when scientific applications are executed. To overcome these bottlenecks, programmers have learned to use a coding style that ensures a better balance between memory references and floating-point operations. In our view, this is a step in the wrong direction because it makes programs more machine-specific. A programmer should not be required to write a new program version for each new machine; instead, the task of specializing a program to a target machine should be left to the compiler. But is our view practical? Can a sophisticated optimizing compiler obviate the need for the myriad of programming tricks that have found their way into practice to improve the performance of the memory hierarchy? In this paper we attempt to answer that question. To do so, we develop and evaluate techniques that automatically restructure program loops to achieve high performance on specific target architectures. These methods attempt to balance computation and memory accesses and seek to eliminate or reduce pipeline interlock. To do this, they statically estimate the balance between memory operations and floating-point operations for each loop in a particular program and use these estimates to determine whether to apply various loop transformations. Experiments with our automatic techniques show that integer-factor speedups are possible on kernels. Additionally, the estimate of the balance between memory operations and computation, and the application of the estimate, are very accurate—experiments reveal little difference between the balance achieved by our automatic system and that made possible by hand optimization.

Publisher

Association for Computing Machinery (ACM)

Subject

Software


Cited by 70 articles.

1. Register Blocking: An Analytical Modelling Approach for Affine Loop Kernels;Proceedings of the 21st ACM International Conference on Computing Frontiers;2024-05-07

2. Register Tiling for Unstructured Sparsity in Neural Network Inference;Proceedings of the ACM on Programming Languages;2023-06-06

3. MD-Roofline: A Training Performance Analysis Model for Distributed Deep Learning;2022 IEEE Symposium on Computers and Communications (ISCC);2022-06-30

4. Uniform lease vs. LRU cache: analysis and evaluation;Proceedings of the 2021 ACM SIGPLAN International Symposium on Memory Management;2021-06-22

5. Vectorization-aware loop unrolling with seed forwarding;Proceedings of the 29th International Conference on Compiler Construction;2020-02-22
