1. SQNR Analysis and Classification Accuracy of the 24-bit Floating Point Representation of the Laplacian Data Source Applied for Quantization of Weights of a Multilayer Perceptron; Dinčić; SAUM Conference Proceedings, 2020
2. Floating Point and Fixed Point 32-bits Quantizers for Quantization of Weights of Neural Networks; Perić; ATEE Conference Proceedings, 2021
3. Optimization of the 24-bit Fixed Point Quantizer for Laplacian Source; Perić; Mathematics, 2023
4. Bfloat16 Processing for Neural Networks