Affiliation:
1. University of Minnesota, Minneapolis, MN
2. Intel Corporation, Hillsboro, OR
3. University of Michigan-Shanghai Jiao Tong University Joint Institute, Shanghai, China
Abstract
Computations based on stochastic bit streams have several advantages compared to deterministic binary radix computations, including low power consumption, low hardware cost, high fault tolerance, and skew tolerance. To take advantage of this computing technique, previous work proposed a combinational logic-based reconfigurable architecture to perform complex arithmetic operations on stochastic streams of bits. The long execution time and the cost of converting between binary and stochastic representations, however, make the stochastic architectures less energy efficient than the deterministic binary implementations. This article introduces a methodology for synthesizing a given target function stochastically using finite-state machines (FSMs), and enhances and extends the reconfigurable architecture using sequential logic. Compared to the previous approach, the proposed reconfigurable architecture can save hardware area and energy consumption by up to 30% and 40%, respectively, while achieving a higher processing speed. Both stochastic reconfigurable architectures are much more tolerant of soft errors (bit flips) than the deterministic binary radix implementations, and their fault tolerance scales gracefully to very large numbers of errors.
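The core idea behind the architectures described above can be illustrated with the classic stochastic-computing primitive: encoding a value as the probability of a 1 in a random bit stream, so that a single AND gate multiplies two values. The sketch below is an illustration of this general technique, not the article's FSM-based synthesis method; the stream length and probabilities are arbitrary choices.

```python
import random

def to_stream(p, n, rng):
    """Encode a value p in [0, 1] as a stochastic bit stream:
    each bit is 1 with probability p, independently."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def from_stream(bits):
    """Decode a stream: the fraction of 1s estimates the encoded value."""
    return sum(bits) / len(bits)

rng = random.Random(42)
n = 100_000  # longer streams trade execution time for accuracy

a = to_stream(0.5, n, rng)
b = to_stream(0.4, n, rng)

# With independent streams, a single AND gate per bit computes the
# product of the two encoded probabilities: P(x AND y) = P(x) * P(y).
product = [x & y for x, y in zip(a, b)]
est = from_stream(product)  # close to 0.5 * 0.4 = 0.2
```

This also hints at why the representation is fault tolerant: flipping a handful of bits in a 100,000-bit stream perturbs the decoded value only slightly, whereas a single bit flip in a binary radix word can change the value by half its range.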
Funder
National Science Foundation
National Natural Science Foundation of China
Publisher
Association for Computing Machinery (ACM)
Subject
Electrical and Electronic Engineering, Hardware and Architecture, Software
Cited by: 16 articles.