Abstract
Convolutional neural networks (CNNs) are widely used in artificial intelligence (AI) projects. Owing to their many advantages, CNNs play a significant role in everyday AI applications such as autonomous driving, semantic segmentation, and face recognition. However, as their application scope expands, many conventional CNN models run into obstacles, the most critical being high memory consumption and heavy computational cost. For edge computing scenarios such as embedded systems and other resource- or power-constrained devices, many CNN models are unaffordable. Moreover, CNNs do not always require full-precision computation. To explore further possibilities, low-precision computation and binary neural networks (BNNs), which consume fewer resources than conventional CNNs, have been proposed. The core computation of a CNN is the multiply-accumulate (MAC) operation, so any advanced architecture design should be built around accelerating MACs. Benefiting from customizable data widths, reconfigurable logic resources, and low power consumption, FPGAs are an appealing platform for implementing low-precision computation and BNNs. This paper introduces the development history, applications, and limitations of CNNs; compares different platforms for implementing CNNs and specific low-precision designs realized on FPGAs; and reviews contemporary FPGA-based accelerators designed to reduce energy consumption and latency and to improve performance.
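To make the abstract's central claim concrete, that CNN computation reduces to MACs and that binarization replaces them with bitwise logic, the following plain-C sketch (our illustration, not code from the paper; it relies on the GCC/Clang __builtin_popcount intrinsic, and all function names are hypothetical) contrasts a full-precision MAC with its binarized equivalent over values in {-1, +1}:

    #include <stdint.h>
    #include <stdio.h>

    /* Full-precision MAC: one multiply-accumulate per weight,
       the dominant operation in a convolution layer. */
    static float mac_fp32(const float *x, const float *w, int n) {
        float acc = 0.0f;
        for (int i = 0; i < n; ++i)
            acc += x[i] * w[i];
        return acc;
    }

    /* Binarized MAC: with activations and weights restricted to {-1, +1}
       and bit-packed (bit = 1 encodes +1, bit = 0 encodes -1), the dot
       product reduces to an XNOR plus a popcount, which maps onto FPGA
       LUTs far more cheaply than a floating-point multiplier. */
    static int mac_bnn(uint32_t x_bits, uint32_t w_bits, int n) {
        uint32_t mask = (n < 32) ? ((1u << n) - 1u) : ~0u;
        uint32_t same = ~(x_bits ^ w_bits) & mask;  /* 1 where signs agree */
        int agree = __builtin_popcount(same);
        return 2 * agree - n;             /* agreements minus disagreements */
    }

    int main(void) {
        float x[4] = { 1, -1,  1, 1 };
        float w[4] = { 1,  1, -1, 1 };
        uint32_t xb = 0xDu;  /* bits 3..0 = 1101, LSB-first packing of x */
        uint32_t wb = 0xBu;  /* bits 3..0 = 1011, LSB-first packing of w */
        printf("fp32 MAC: %.0f\n", mac_fp32(x, w, 4));  /* prints 0 */
        printf("BNN  MAC: %d\n",   mac_bnn(xb, wb, 4)); /* prints 0 */
        return 0;
    }

In hardware, packing 32 binary weights into one machine word turns 32 multiply-accumulates into a single XNOR plus popcount; this collapse of arithmetic into bitwise logic is the resource saving that makes BNNs attractive on LUT-based FPGA fabric.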
Publisher
Darcy & Roy Press Co. Ltd.