Affiliation:
1. Technische Universität Wien (TU Wien), Vienna, Austria
Abstract
Deep neural networks (DNNs) have proliferated across most application domains that involve data processing, predictive analysis and knowledge inference. Alongside the need for highly performance-efficient DNN accelerators, there is a pressing need to improve the yield of the manufacturing process in order to reduce the per-unit cost of these accelerators. To this end, we present ‘SalvageDNN’, a methodology that enables reliable execution of DNNs on hardware accelerators with permanent faults (typically caused by imperfect manufacturing processes). It employs a fault-aware mapping of the different parts of a given DNN onto the (faulty) hardware accelerator by leveraging the saliency of the DNN parameters and the fault map of the underlying processing hardware. We also present novel modifications to a systolic array design that further improve the yield of the accelerators while ensuring reliable DNN execution under ‘SalvageDNN’, with negligible overheads in terms of area, power/energy and performance.
This article is part of the theme issue ‘Harmonizing energy-autonomous computing and intelligence’.
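To make the mapping idea concrete, the following is a minimal Python sketch of how a saliency-driven fault-aware mapping could look under simple assumptions: the function name `fault_aware_mapping`, the fixed array height `PE_ROWS`, the row-level mapping granularity and the weight-magnitude saliency proxy are all illustrative choices, not the paper's actual implementation.

```python
# Hypothetical sketch of saliency-driven fault-aware mapping (assumed design,
# not the paper's API): permute weight rows so that the least-salient rows
# land on the PE rows marked faulty in the fault map, then bypass (zero) them.
import numpy as np

PE_ROWS = 8  # assumed systolic-array height; one weight row per PE row

def fault_aware_mapping(weights, saliency, faulty_rows):
    """Map salient weight rows to healthy PE rows.

    weights     : (PE_ROWS, n) array of layer weights, one row per PE row
    saliency    : (PE_ROWS, n) array of per-weight saliency scores
    faulty_rows : set of PE-row indices marked faulty in the fault map
    Returns the permuted weights (faulty-row weights zeroed, i.e. bypassed)
    and the permutation, so partial sums can be reordered afterwards.
    """
    # Rank logical rows by aggregate saliency, most salient first.
    order = np.argsort(-saliency.sum(axis=1))
    healthy = [r for r in range(PE_ROWS) if r not in faulty_rows]
    faulty = [r for r in range(PE_ROWS) if r in faulty_rows]
    # Most-salient rows go to healthy PEs, least-salient to faulty ones.
    perm = np.empty(PE_ROWS, dtype=int)
    for logical, physical in zip(order, healthy + faulty):
        perm[physical] = logical
    mapped = weights[perm].copy()
    mapped[faulty] = 0.0  # a faulty PE is bypassed and contributes zero
    return mapped, perm

# Usage: map a random layer onto an array whose PE rows 2 and 5 are faulty.
rng = np.random.default_rng(0)
w = rng.normal(size=(PE_ROWS, 16))
s = np.abs(w)  # weight magnitude as a simple saliency proxy
mapped, perm = fault_aware_mapping(w, s, faulty_rows={2, 5})
```

Because only the mapping changes, the least-salient parameters absorb the faults, which is why the accuracy impact and the runtime overhead can both stay small.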
Funder
Deutsche Forschungsgemeinschaft
Subject
General Physics and Astronomy, General Engineering, General Mathematics