Affiliation:
1. Alternative Computing Technologies (ACT) Lab, Georgia Institute of Technology
Abstract
Conventionally, an approximate accelerator replaces every invocation of a frequently executed region of code without considering the final quality degradation. However, there is a vast decision space in which each invocation can either be delegated to the accelerator---improving performance and efficiency---or run on the precise core---maintaining quality. In this paper, we introduce Mithra, a co-designed hardware-software solution that navigates these tradeoffs to deliver high performance and efficiency while lowering the final quality loss. Mithra seeks to identify whether each individual accelerator invocation will lead to an undesirable quality loss and, if so, directs the processor to run the original precise code.
This identification is cast as a binary classification task that requires a cohesive co-design of hardware and software. The hardware component performs the classification at runtime and exposes a knob to the software mechanism to control quality tradeoffs. The software tunes this knob by solving a statistical optimization problem that maximizes the benefits from approximation while providing statistical guarantees that the final quality level will be met with high confidence. The software uses this knob to tune and train the hardware classifiers. We devise two distinct hardware classifiers, one table-based and one neural network based. To understand the efficacy of these mechanisms, we compare them with an ideal, but infeasible, design: the oracle.
Results show that, with 95% confidence, the table-based design can restrict the final output quality loss to 5% for 90% of unseen input sets while providing 2.5× speedup and 2.6× energy efficiency. The neural design shows similar speedup; however, it improves efficiency by 13%. Compared to the table-based design, the oracle improves speedup by 26% and efficiency by 36%. These results show that Mithra performs within a close range of the oracle and can effectively navigate the quality tradeoffs in approximate acceleration.
Publisher
Association for Computing Machinery (ACM)
Cited by
7 articles.
1. AutoConstruct: Automated Neural Surrogate Model Building and Deployment for HPC Applications;Proceedings of the 13th Workshop on AI and Scientific Computing at Scale using Flexible Computing;2023-08-10
2. Auto-HPCnet: An Automatic Framework to Build Neural Network-based Surrogate for High-Performance Computing Applications;Proceedings of the 32nd International Symposium on High-Performance Parallel and Distributed Computing;2023-08-07
3. Towards Fine-Grained Online Adaptive Approximation Control for Dense SLAM on Embedded GPUs;ACM Transactions on Design Automation of Electronic Systems;2022-03-31
4. MIPAC;Proceedings of the 26th Asia and South Pacific Design Automation Conference;2021-01-18
5. Intelligent Management of Mobile Systems Through Computational Self-Awareness;Advances in Systems Analysis, Software Engineering, and High Performance Computing;2021