Authors: Ghada Alsuhli, Vasilis Sakellariou, Hani Saleh, Mahmoud Al-Qutayri, Baker Mohammad, Thanos Stouraitis
Publisher: Springer Nature Switzerland
References (28 articles):
1. Sakai, Y.: Quantization for deep neural network training with 8-bit dynamic fixed point. In: 2020 7th International Conference on Soft Computing & Machine Intelligence (ISCMI), pp. 126–130. IEEE (2020)
2. Jo, S., Park, H., Lee, G., Choi, K.: Training neural networks with low precision dynamic fixed-point. In: 2018 IEEE 36th International Conference on Computer Design (ICCD), pp. 405–408. IEEE (2018)
3. Das, D., Mellempudi, N., Mudigere, D., Kalamkar, D., Avancha, S., Banerjee, K., Sridharan, S., Vaidyanathan, K., Kaul, B., Georganas, E., et al.: Mixed precision training of convolutional neural networks using integer operations (2018). arXiv:1802.00930
4. Wu, Y.C., Huang, C.T.: Efficient dynamic fixed-point quantization of CNN inference accelerators for edge devices. In: 2019 International Symposium on VLSI Design, Automation and Test (VLSI-DAT), pp. 1–4. IEEE (2019)
5. de Prado, M., Denna, M., Benini, L., Pazos, N.: QUENN: Quantization engine for low-power neural networks. In: Proceedings of the 15th ACM International Conference on Computing Frontiers, pp. 36–44 (2018)