Abstract
Energy optimization is an increasingly important aspect of today’s high-performance computing applications. In particular, dynamic voltage and frequency scaling (DVFS) has become a widely adopted solution to balance performance and energy consumption, and hardware vendors provide management libraries that allow the programmer to change both memory and core frequencies manually in order to minimize energy consumption while maximizing performance. This article focuses on modeling the energy consumption and speedup of GPU applications under different frequency configurations. The task is not straightforward, because of the large set of possible and non-uniformly distributed configurations, and because of the multi-objective nature of the problem, which requires minimizing energy consumption while maximizing performance. This article proposes a machine learning-based method to predict the best core and memory frequency configurations on GPUs for an input OpenCL kernel. The method is based on two models, one for speedup and one for normalized energy prediction over the default frequency configuration. These are then combined into a multi-objective approach that predicts a Pareto set of frequency configurations. Results show that our approach is very accurate at predicting extrema and the Pareto set, and finds frequency configurations that dominate the default configuration in either energy or performance.
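To make the multi-objective step concrete, the following minimal Python sketch shows how a Pareto set of non-dominated configurations could be filtered from model-predicted speedup (to be maximized) and normalized energy (to be minimized). This is not the article's implementation; the (memory, core) frequency pairs and the predicted values are illustrative assumptions only.

from typing import Dict, List, Tuple

Config = Tuple[int, int]          # (memory clock in MHz, core clock in MHz)
Prediction = Tuple[float, float]  # (predicted speedup, predicted normalized energy)

def pareto_set(predictions: Dict[Config, Prediction]) -> List[Config]:
    # Keep a configuration if no other configuration is at least as good in
    # both objectives and strictly better in at least one of them.
    front = []
    for cfg, (s, e) in predictions.items():
        dominated = any(
            s2 >= s and e2 <= e and (s2 > s or e2 < e)
            for cfg2, (s2, e2) in predictions.items()
            if cfg2 != cfg
        )
        if not dominated:
            front.append(cfg)
    return front

# Illustrative model outputs, normalized to the default configuration
# (speedup = 1.0, normalized energy = 1.0); not measurements from the article.
predicted = {
    (3505, 1114): (1.00, 1.00),  # assumed default configuration
    (3505, 1392): (1.12, 1.05),  # faster, slightly more energy
    (3505, 1240): (1.05, 1.10),  # dominated by (3505, 1392)
    (3505,  810): (0.85, 0.78),  # slower, more energy-efficient
    ( 810,  810): (0.45, 0.70),  # lowest energy, lowest performance
}

print(pareto_set(predicted))
# Every printed configuration trades performance against energy;
# (3505, 1240) is filtered out because (3505, 1392) dominates it.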
Funder
China Scholarship Council
Subject
Applied Mathematics, Modelling and Simulation, General Computer Science, Theoretical Computer Science
Cited by
4 articles.
1. An OP-TEE Energy-Efficient Task Scheduling Approach Based on Mobile Application Characteristics. Intelligent Automation & Soft Computing, 2023.
2. Going green: optimizing GPUs for energy efficiency through model-steered auto-tuning. 2022 IEEE/ACM International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS), November 2022.
3. Decoupling GPGPU voltage-frequency scaling for deep-learning applications. Journal of Parallel and Distributed Computing, July 2022.
4. Exploiting Non-conventional DVFS on GPUs: Application to Deep Learning. 2020 IEEE 32nd International Symposium on Computer Architecture and High Performance Computing (SBAC-PAD), September 2020.