Affiliation:
1. National Key Laboratory of Electromagnetic Energy, Naval University of Engineering, Wuhan, China
Abstract
This article proposes a novel angle-based representation for floating-point numbers that eliminates the need for DSP (digital signal processor) resources and reduces overall resource usage in floating-point multiplication. Compared with a floating-point multiplier implemented with IP cores, the lookup-table-based approximate multiplier achieves an average reduction of 58.2% in LUTs (lookup tables) and an average increase of 20.4% in operating frequency for mantissa widths from 3 to 12 bits, while also saving an average of 23.2% in registers. An analysis of PDP (power-delay product)/LUT versus MRED (mean relative error distance) and PRED (probability of relative error distance) against other approximate multipliers shows that the proposed design extends the Pareto front. Finally, a simulation of a three-level inverter verifies the effectiveness of the multiplier.
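The abstract does not detail the angle representation itself, so the following C sketch illustrates only the general principle the design relies on: replacing the mantissa multiplier with small lookup tables. It uses a log-domain (Mitchell-style) mapping as a stand-in for the paper's actual scheme; the table names, widths, and the MANT_BITS parameter are assumptions for illustration. Two table lookups and one addition replace the hardware multiply, which is why no DSP blocks are required.

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

#define MANT_BITS 8                    /* mantissa width under test (paper sweeps 3..12) */
#define LUT_SIZE  (1 << MANT_BITS)

static uint16_t log_lut[LUT_SIZE];     /* mantissa index -> fixed-point log2(1.m)      */
static uint16_t exp_lut[LUT_SIZE];     /* fractional log2 -> mantissa (inverse table)  */

/* Fill both tables; on an FPGA these would be small ROMs mapped to LUTs. */
static void build_luts(void) {
    for (int i = 0; i < LUT_SIZE; i++) {
        double m = 1.0 + (double)i / LUT_SIZE;              /* implicit-1 mantissa */
        log_lut[i] = (uint16_t)(log2(m) * LUT_SIZE + 0.5);
        double f = (double)i / LUT_SIZE;                    /* fractional exponent */
        exp_lut[i] = (uint16_t)((exp2(f) - 1.0) * LUT_SIZE + 0.5);
    }
}

/* Approximate mantissa product: two lookups plus one add replace the multiplier.
   *carry reports whether the product reached 2.0 (exponent must be bumped). */
static double approx_mant_mul(unsigned ma, unsigned mb, int *carry) {
    unsigned s = log_lut[ma] + log_lut[mb];   /* add in the log domain   */
    *carry = (s >= LUT_SIZE);                 /* integer part of the sum */
    return 1.0 + (double)exp_lut[s & (LUT_SIZE - 1)] / LUT_SIZE;
}

int main(void) {
    build_luts();
    unsigned ma = 0x5A, mb = 0xC3;            /* arbitrary 8-bit mantissa fields */
    int carry;
    double exact  = (1.0 + (double)ma / LUT_SIZE) * (1.0 + (double)mb / LUT_SIZE);
    double approx = approx_mant_mul(ma, mb, &carry) * (carry ? 2.0 : 1.0);
    printf("exact %.6f  approx %.6f  rel. err %.4f%%\n",
           exact, approx, 100.0 * fabs(approx - exact) / exact);
    return 0;
}
```

Sweeping MANT_BITS from 3 to 12 in a model like this is one way to reproduce the kind of accuracy-versus-resource trade-off (MRED against LUT count) that the abstract reports.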
Funder
National Key Laboratory Foundation of China
National Natural Science Foundation of China