Affiliation:
1. Microsoft, Gachibowli, Hyderabad, Telangana, India
Abstract
In recent years, the fields of natural language processing (NLP) and information retrieval (IR) have made tremendous progress thanks to deep learning models such as Recurrent Neural Networks (RNNs), Gated Recurrent Units (GRUs), and Long Short-Term Memory (LSTM) networks, and Transformer-based [121] models such as Bidirectional Encoder Representations from Transformers (BERT) [24], Generative Pre-training Transformer (GPT-2) [95], Multi-task Deep Neural Network (MT-DNN) [74], Extra-Long Network (XLNet) [135], Text-to-Text Transfer Transformer (T5) [96], T-NLG [99], and GShard [64]. However, these models are enormous in size, whereas real-world applications demand small model sizes, low response times, and low computational power consumption. In this survey, we discuss six different types of methods (Pruning, Quantization, Knowledge Distillation (KD), Parameter Sharing, Tensor Decomposition, and Sub-quadratic Transformer-based methods) for compressing such models to enable their deployment in real industry NLP projects. Given the critical need to build applications with efficient and small models, and the large amount of work recently published in this area, we believe that this survey organizes the plethora of work done by the “deep learning for NLP” community over the past few years and presents it as a coherent story.
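To make the first two method families named in the abstract concrete, below is a minimal sketch (not taken from the survey) of how magnitude pruning and post-training dynamic quantization might be applied to a toy Transformer-style feed-forward block using PyTorch's built-in utilities. The layer sizes, the 30% pruning ratio, and the int8 target are illustrative assumptions, not settings recommended by the authors.

# Illustrative sketch only: magnitude pruning + dynamic quantization with
# PyTorch utilities. Sizes and ratios are arbitrary, not from the survey.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

# Toy feed-forward block standing in for one Transformer sub-layer.
model = nn.Sequential(
    nn.Linear(768, 3072),
    nn.GELU(),
    nn.Linear(3072, 768),
)

# Pruning: zero out the 30% smallest-magnitude weights in each linear layer.
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the sparsity into the tensor

# Quantization: store weights as int8 and quantize activations on the fly
# at inference time (post-training dynamic quantization).
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# The compressed model is a drop-in replacement at inference time.
with torch.no_grad():
    out = quantized_model(torch.randn(1, 768))

The two steps are independent and can be combined with the other surveyed families (e.g., distilling into a smaller student before pruning); the sketch only shows the mechanics, not the accuracy/size trade-offs the survey analyzes.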
Publisher
Association for Computing Machinery (ACM)
References (145 articles; first three shown)
1. Md. Zahangir Alom, Adam T. Moody, Naoya Maruyama, Brian C. Van Essen, and Tarek M. Taha. 2018. Effective quantization approaches for recurrent neural networks. In Proceedings of the 2018 International Joint Conference on Neural Networks. IEEE, 1–8.
2. Rohan Anil et al. 2018. Large scale distributed neural network training through online distillation. arXiv:1804.03235.
3. Thomas M. Bartol et al. 2015. Hippocampal spine head sizes are highly precise. bioRxiv.
Cited by
37 articles.