Affiliation:
1. Department of Computer Science and Applications, The Gandhigram Rural Institute (Deemed to be University), Dindigul, Tamil Nadu, India
Abstract
The emergence of powerful deep learning architectures has led to breakthrough innovations in fields such as healthcare, precision farming, banking, and education. Despite these advantages, deploying deep learning models on resource-constrained devices remains difficult because of their large memory footprint. This work reports a hybrid compression pipeline for neural networks that exploits the untapped potential of the z-score in weight pruning, followed by quantization using DBSCAN clustering and Huffman encoding. The proposed model was evaluated on LeNet deep neural network architectures using the standard MNIST and CIFAR datasets. Experimental results show that DeepCompNet achieves 26x compression without compromising accuracy. The synergistic blend of compression algorithms in the proposed model enables effortless deployment of neural networks powering DL applications on memory-constrained devices.
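To make the three-stage pipeline named in the abstract concrete, the sketch below strings z-score pruning, DBSCAN-based quantization, and Huffman coding together on a synthetic weight tensor. It is a minimal illustration under stated assumptions: the pruning threshold, the DBSCAN eps value, and all function names are choices of this sketch, not the authors' published implementation.

```python
# Hedged sketch of the pipeline the abstract outlines: z-score pruning,
# DBSCAN quantization, Huffman coding. Thresholds and eps are illustrative
# assumptions, not the authors' settings.
import heapq
from collections import Counter
from itertools import count

import numpy as np
from sklearn.cluster import DBSCAN


def zscore_prune(w, threshold=0.5):
    """Zero out weights whose |z-score| falls below the threshold (assumed rule)."""
    z = (w - w.mean()) / w.std()
    return np.where(np.abs(z) < threshold, 0.0, w)


def dbscan_quantize(w, eps=0.02):
    """Snap each surviving weight to the mean of its DBSCAN cluster."""
    mask = w != 0
    vals = w[mask].reshape(-1, 1)
    labels = DBSCAN(eps=eps, min_samples=2).fit_predict(vals)
    snapped = vals.ravel().copy()
    for k in set(labels):
        if k != -1:  # leave DBSCAN noise points unquantized
            snapped[labels == k] = vals[labels == k].mean()
    q = w.copy()
    q[mask] = snapped
    return q


def huffman_code_lengths(symbols):
    """Return Huffman code length per symbol (enough to estimate coded size)."""
    freq = Counter(symbols)
    if len(freq) == 1:
        return {next(iter(freq)): 1}
    tiebreak = count()  # avoids comparing dicts when frequencies tie
    heap = [(f, next(tiebreak), {s: 0}) for s, f in freq.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: d + 1 for s, d in {**a, **b}.items()}
        heapq.heappush(heap, (fa + fb, next(tiebreak), merged))
    return heap[0][2]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=10_000).astype(np.float32)  # stand-in weight tensor
    q = dbscan_quantize(zscore_prune(w))
    symbols = q.round(4).tolist()
    lengths = huffman_code_lengths(symbols)
    bits = sum(lengths[s] for s in symbols)
    print(f"~{bits / 8 / 1024:.1f} KiB coded vs {w.nbytes / 1024:.1f} KiB raw")
```

The ordering mirrors the abstract's rationale: pruning first concentrates the weight distribution into zeros plus a few dense value regions, which gives the density-based DBSCAN step natural clusters and gives the Huffman coder a highly skewed symbol distribution to exploit.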
Subject
General Mathematics, General Medicine, General Neuroscience, General Computer Science
Cited by 12 articles.