Authors: Zhen Xie, Siddhisanket Raskar, Murali Emani, Venkatram Vishwanath
Publisher: Springer Nature Switzerland