Floating-Point Quantization Analysis of Multi-Layer Perceptron Artificial Neural Networks
Published: 2024-03-25
Volume: 96, Issue: 4-5, Pages: 301-312
ISSN: 1939-8018
Container title: Journal of Signal Processing Systems (J Sign Process Syst)
Language: en
Authors: Hussein Al-Rikabi, Balázs Renczes
Abstract
The impact of quantization in Multi-Layer Perceptron (MLP) Artificial Neural Networks (ANNs) is presented in this paper. In this architecture, the constant increase in network size and the demand for lower bit precision are two factors that significantly enlarge quantization errors. We introduce an analytical tool that models the propagation of Quantization Noise Power (QNP) in floating-point MLP ANNs. In contrast to the state-of-the-art approach, which compares exact and quantized data experimentally, the proposed algorithm predicts the QNP theoretically when the effects of operation quantization and Coefficient Quantization Error (CQE) are considered. This supports decisions on the required precision during hardware design. The algorithm is flexible in handling MLP ANNs with user-defined parameters, such as network size and type of activation function. Additionally, a simulation environment is built that can perform each operation at an adjustable bit precision. The accuracy of the QNP calculation is verified on two publicly available benchmark datasets, using the default-precision simulation environment as a reference.
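The notion of quantization noise power used in the abstract can be illustrated with a small, hypothetical sketch (not the paper's analytical algorithm): values are rounded to a reduced floating-point mantissa precision, and the QNP is measured empirically as the mean squared difference between exact and quantized data. The function name and precision values below are illustrative assumptions.

```python
import numpy as np

def quantize_mantissa(x, p):
    """Round x to p fractional mantissa bits (round-to-nearest),
    mimicking a reduced-precision floating-point representation.
    Illustrative helper, not the paper's simulation environment."""
    m, e = np.frexp(x)                 # x = m * 2**e, with |m| in [0.5, 1)
    m_q = np.round(m * 2.0**p) / 2.0**p
    return np.ldexp(m_q, e)

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)       # stand-in for exact (reference) data

# Empirical quantization noise power at a few example precisions:
# QNP shrinks roughly by a factor of 4 per extra mantissa bit.
for p in (8, 12, 16):
    qnp = np.mean((x - quantize_mantissa(x, p)) ** 2)
    print(f"p = {p:2d} mantissa bits: QNP = {qnp:.3e}")
```

Comparing such an empirical measurement against a theoretical prediction is the kind of verification the paper performs against its default-precision reference environment.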
Funders: Nemzeti Kutatási, Fejlesztési és Innovációs Alap; Budapest University of Technology and Economics
Publisher: Springer Science and Business Media LLC
References (30 articles; first 5 shown):
1. Chao, Z., & Kim, H. J. (2020). Brain image segmentation based on the hybrid of back propagation neural network and AdaBoost system. Journal of Signal Processing Systems, 92, 289-298.
2. Sahoo, M., Dey, S., Sahoo, S., Das, A., Ray, A., Nayak, S., & Subudhi, E. (2023). MLP (multi-layer perceptron) and RBF (radial basis function) neural network approach for estimating and optimizing 6-gingerol content in Zingiber officinale Rosc. in different agro-climatic conditions. Industrial Crops and Products, 198, 116658.
3. Yin, P., Wang, C., Liu, W., Swartzlander, E. E., & Lombardi, F. (2018). Designs of approximate floating-point multipliers with variable accuracy for error-tolerant applications. Journal of Signal Processing Systems, 90, 641-654.
4. Barrachina, J. A., Ren, C., Morisseau, C., Vieillard, G., & Ovarlez, J. P. (2023). Comparison between equivalent architectures of complex-valued and real-valued neural networks-application on polarimetric SAR image segmentation. Journal of Signal Processing Systems, 95(1), 57-66.
5. Huang, A., Cao, Z., Wang, C., Wen, J., Lu, F., & Xu, L. (2021). An FPGA-based on-chip neural network for TDLAS tomography in dynamic flames. IEEE Transactions on Instrumentation and Measurement, 70, 1-11.
Cited by: 1 article.