Affiliation:
1. State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences
2. State Key Laboratory of Computer Architecture, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences; Peng Cheng Laboratory
Abstract
Recently, the development of deep learning has propelled rapid growth of vision and speech applications on lightweight embedded and mobile systems. However, the limited computation resources and power delivery capability of embedded platforms are a significant bottleneck that prevents these systems from providing real-time deep learning capability, since inference with deep convolutional neural networks (CNNs) and recurrent neural networks (RNNs) involves large numbers of weights and operations. In particular, providing quality-of-service (QoS)-guaranteed neural network inference in the multitask execution environment of multicore SoCs is even more complicated because of resource contention. In this article, we present a novel deep neural network architecture, MV-Net, which provides performance elasticity and contention-aware self-scheduling for QoS enhancement in mobile computing systems. When the QoS constraints, output accuracy requirements, or resource contention status of the system change, MV-Net can dynamically reconfigure the corresponding neural network propagation paths and thus achieves an effective tradeoff between computational complexity and prediction accuracy via approximate computing. The experimental results show that (1) MV-Net significantly improves the performance flexibility of current CNN models and makes it possible to provide always-guaranteed QoS in a multitask environment, and (2) it satisfies the quality-of-results (QoR) requirements, significantly outperforms the baseline implementation, and improves system energy efficiency at the same time.
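The reconfigurable-path idea can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation): a PyTorch-style model in which each layer offers several convolution "versions" of different widths, and a simple profiled-latency scheduler picks the widest version that still fits the current QoS budget. All names, widths, and latency numbers below are illustrative assumptions.

```python
# Hypothetical sketch of an elastic, multi-version network with a
# contention-aware version selector, in the spirit of the abstract above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiVersionBlock(nn.Module):
    """One layer with several convolution 'versions' of increasing width/cost."""
    def __init__(self, in_ch, out_ch, widths=(0.25, 0.5, 1.0)):
        super().__init__()
        self.paths = nn.ModuleList(
            nn.Conv2d(in_ch, max(1, int(out_ch * w)), kernel_size=3, padding=1)
            for w in widths
        )
        self.out_ch = out_ch

    def forward(self, x, version):
        y = F.relu(self.paths[version](x))
        # Zero-pad channels so downstream layers always see a fixed channel count.
        pad = self.out_ch - y.shape[1]
        return F.pad(y, (0, 0, 0, 0, 0, pad)) if pad > 0 else y

class MVNetSketch(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.block1 = MultiVersionBlock(3, 32)
        self.block2 = MultiVersionBlock(32, 64)
        self.head = nn.Linear(64, num_classes)

    def forward(self, x, version=2):
        x = F.max_pool2d(self.block1(x, version), 2)
        x = F.max_pool2d(self.block2(x, version), 2)
        return self.head(x.mean(dim=(2, 3)))

def pick_version(latency_budget_ms, profiled_ms=(4.0, 7.0, 12.0)):
    """Stand-in for contention-aware self-scheduling: choose the widest (most
    accurate) version whose profiled latency still fits the QoS budget."""
    best = 0
    for v, t in enumerate(profiled_ms):
        if t <= latency_budget_ms:
            best = v
    return best

if __name__ == "__main__":
    net = MVNetSketch().eval()
    x = torch.randn(1, 3, 32, 32)
    v = pick_version(latency_budget_ms=8.0)  # budget shrinks under contention
    with torch.no_grad():
        logits = net(x, version=v)
    print(f"selected version {v}, logits shape {tuple(logits.shape)}")
```

In this sketch the accuracy/complexity tradeoff comes only from the selected convolution width; the paper's actual reconfiguration of propagation paths and its scheduling policy may differ.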
Funder
National Natural Science Foundation of China
Young Elite Scientists Sponsorship Program (YESS)
Publisher
Association for Computing Machinery (ACM)
Subject
Electrical and Electronic Engineering, Hardware and Architecture, Software
Cited by
5 articles.