Authors: Machupalli Raju, Hossain Masum, Mandal Mrinal
Publisher: Springer Nature Singapore