FPGA-Based BNN Architecture in Time Domain with Low Storage and Power Consumption

Author:

Zhang Longlong, Tang Xuebin, Hu Xiang, Zhou Tong, Peng Yuanxi

Abstract

With the increasing demand for convolutional neural networks (CNNs) in edge computing and other resource-limited settings, researchers have made efforts to deploy lightweight neural networks on hardware platforms. Binarized neural networks (BNNs) perform well in such tasks, but many implementations still struggle to balance accuracy against computational complexity while meeting tight power and storage budgets. This paper first proposes a novel time-domain binary convolution structure that reduces the resource and power consumption of the convolution process. Furthermore, through the joint design of binary convolution, batch normalization, and the activation function in the time domain, we propose a full-BNN model and hardware architecture (Model I), which keeps all intermediate results binary (1 bit) and thereby reduces storage requirements by 75%. At the same time, we propose a mixed-precision BNN structure (Model II) based on the sensitivity of different network layers to calculation accuracy: the layer most sensitive to the classification result uses fixed-point data, while the other layers use binary data in the time domain, striking a balance between accuracy and computing resources. Lastly, we take the MNIST dataset as an example and test both models on a field-programmable gate array (FPGA) platform. The results show that both models can serve as low-storage, low-power neural network acceleration units for classification tasks with only a small decline in accuracy. The joint design method in the time domain may further inspire other computing architectures, and the design of Model II offers a useful reference for more complex classification tasks.
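The abstract's full-BNN idea rests on a standard BNN property: when both activations and weights are constrained to {-1, +1}, a convolution's dot products reduce to XNOR followed by a popcount, and applying a sign activation afterward keeps every intermediate result at 1 bit. The sketch below illustrates that conventional XNOR-popcount formulation in plain numpy; it is not the paper's time-domain circuit (which is not specified in the abstract), and the function names `binarize` and `xnor_popcount_conv2d` are illustrative, not from the paper.

```python
import numpy as np

def binarize(x):
    # Sign function mapping real values to {-1, +1} (1-bit representation)
    return np.where(x >= 0, 1, -1).astype(np.int8)

def xnor_popcount_conv2d(inputs, weights):
    """Binary 2D convolution (valid padding, stride 1).

    inputs:  (H, W) feature map with values in {-1, +1}
    weights: (kH, kW) kernel with values in {-1, +1}
    For +/-1 operands, dot = 2 * popcount(XNOR) - n, where n is the
    number of kernel elements; XNOR and popcount map directly to gates.
    """
    H, W = inputs.shape
    kH, kW = weights.shape
    n = kH * kW
    # Pack {-1, +1} into {0, 1} bits, as hardware would store them
    in_bits = (inputs > 0)
    w_bits = (weights > 0)
    out = np.empty((H - kH + 1, W - kW + 1), dtype=np.int32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            xnor = ~(in_bits[i:i + kH, j:j + kW] ^ w_bits)  # bitwise XNOR
            out[i, j] = 2 * int(np.count_nonzero(xnor)) - n
    return out
```

Feeding the integer output through `binarize` again (as the sign activation after batch normalization) restores a 1-bit feature map, which is the mechanism behind keeping all of Model I's intermediate results binary.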

Funder

The National Natural Science Foundation of China

The National University of Defense Technology Foundation

Publisher

MDPI AG

Subject

Electrical and Electronic Engineering, Computer Networks and Communications, Hardware and Architecture, Signal Processing, Control and Systems Engineering

