Affiliation:
1. Department of Electronics Engineering and Telecommunications, Faculty of Engineering, State University of Rio de Janeiro, Brazil
2. Department of Systems Engineering and Computation, Faculty of Engineering, State University of Rio de Janeiro, Brazil
Abstract
In this paper, we devise an adaptive hardware architecture for artificial neural networks (ANNs) that takes advantage of the FPGA's dedicated multiply-and-accumulate blocks, commonly called MACs, to compute both the weighted sum and the activation function. The proposed architecture requires a reduced silicon area, since the MACs come at no extra cost: they are built-in hardcores of the FPGA and, if left unused, cannot be optimized away in the final design. The implementation uses integer fixed-point arithmetic, representing real numbers as scaled integer fractions. The hardware is fast because it is massively parallel, yet compact, as it has a single physical layer of neurons while the remaining layers are virtual. Moreover, the proposed architecture is adaptive: it adjusts itself on-the-fly to the user-defined configuration of the neural network, i.e., the number of layers, the number of neurons per layer, and the topology of the ANN can all be configured without hardware changes or any supplementary design effort.
Publisher
World Scientific Pub Co Pte Lt
Subject
Electrical and Electronic Engineering, Hardware and Architecture