Affiliation:
1. Jilin University, Qianjin Street, Changchun, Jilin, China
2. Jilin University, Nanhu Street, Changchun, Jilin, China
Abstract
Graphics Processing Units (GPUs) are widely used in general-purpose high-performance computing due to their highly parallel architecture. In recent years, integrated-circuit manufacturing has entered the nanometer-scale era, further strengthening GPUs' computational capability. However, as process technology scales down, hardware variability, e.g., process variations (PVs) and negative bias temperature instability (NBTI), has a growing impact on chip quality. GPU parallelism demands high consistency among on-chip hardware units; otherwise, the worst unit inevitably becomes the bottleneck. Hardware variability has therefore become a pressing concern for further improving GPU performance and lifetime, not only in integrated-circuit fabrication but, even more so, in GPU architecture design.
Streaming Processors (SPs) are the key units in GPUs, performing most of the parallel computing operations. In this work, we therefore focus on mitigating the impact of hardware variability on GPU SPs. We first model and analyze SPs' performance variations under hardware variability and observe that both PV and NBTI have a large impact on SP performance. We further observe unbalanced SP utilization during program execution, e.g., some SPs are idle while others are active. Leveraging this observation, we propose a Hardware Variability-aware SPs' Management policy (HVSM), which dynamically dispatches computation to appropriate SPs to balance their utilization. In addition, we find that a large portion of compute operations are duplicates, and we propose an Operation Compression (OC) technique that eliminates these unnecessary computations to further mitigate hardware variability effects. Our experimental results show that the combined HVSM and OC technique effectively reduces the impact of hardware variability, translating into a 37% performance improvement or an 18.3% lifetime extension for a GPU chip.
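The abstract describes HVSM and OC only at a high level. The following is a minimal, hypothetical C++ sketch of the general idea, assuming a dispatcher that knows each SP's delay penalty (from PV) and keeps an activity counter as an NBTI-aging proxy. The type and function names (SpState, pick_sp, dispatch_warp) and the cost weighting are illustrative assumptions, not the paper's implementation.

// Hypothetical sketch of HVSM-style dispatch combined with Operation
// Compression (OC). This is not the paper's implementation; the per-SP
// delay and stress values, and all names below, are illustrative assumptions.
#include <cstdint>
#include <iostream>
#include <map>
#include <tuple>
#include <vector>

struct SpState {
    double pv_delay_ps;   // static delay penalty from process variation
    double nbti_stress;   // accumulated activity, used as an NBTI-aging proxy
    bool   busy = false;
};

struct Op {
    char    opcode;       // e.g. '+', '*'
    int32_t src0, src1;
};

// HVSM-style policy: among idle SPs, prefer one that is both fast
// (low PV delay) and lightly aged (low NBTI stress) to balance wear.
int pick_sp(const std::vector<SpState>& sps) {
    int best = -1;
    double best_cost = 1e300;
    for (size_t i = 0; i < sps.size(); ++i) {
        if (sps[i].busy) continue;
        double cost = sps[i].pv_delay_ps + 10.0 * sps[i].nbti_stress;
        if (cost < best_cost) { best_cost = cost; best = static_cast<int>(i); }
    }
    return best;  // -1 means all SPs are busy
}

// OC: collapse duplicate (opcode, operand) tuples so each unique
// operation executes once and its result is reused by the duplicates.
void dispatch_warp(std::vector<SpState>& sps, const std::vector<Op>& warp) {
    std::map<std::tuple<char, int32_t, int32_t>, int32_t> results;
    for (const Op& op : warp) {
        auto key = std::make_tuple(op.opcode, op.src0, op.src1);
        if (results.find(key) != results.end()) continue;  // duplicate: reuse result

        int sp = pick_sp(sps);
        if (sp < 0) continue;               // simplification: skip if no SP is idle
        int32_t r = (op.opcode == '+') ? op.src0 + op.src1 : op.src0 * op.src1;
        results[key] = r;
        sps[sp].nbti_stress += 1.0;         // executing work ages the SP
        sps[sp].busy = true;
    }
    for (auto& s : sps) s.busy = false;     // warp finished; SPs become idle again
    std::cout << "executed " << results.size() << " of " << warp.size()
              << " operations after compression\n";
}

int main() {
    std::vector<SpState> sps = {{5.0, 0.0}, {7.5, 0.0}, {4.0, 0.0}, {6.0, 0.0}};
    std::vector<Op> warp = {{'+', 1, 2}, {'+', 1, 2}, {'*', 3, 3}, {'+', 1, 2}};
    dispatch_warp(sps, warp);
    return 0;
}

In this toy example, three of the four operations in the warp share identical opcodes and operands, so only two executions are issued, and both are steered toward the fastest, least-aged idle SPs. The paper's actual policy operates at the microarchitecture level with detailed PV and NBTI models rather than this software approximation.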
Funder
National Natural Science Foundation of China
Science and Technology Development Program of Jilin Province
Publisher
Association for Computing Machinery (ACM)
Subject
Electrical and Electronic Engineering, Computer Graphics and Computer-Aided Design, Computer Science Applications