Abstract
Deep learning models are typically large, which makes them hard to deploy on resource-limited devices such as mobile phones and embedded systems. One possible solution is knowledge distillation, whereby a smaller model (the student) is trained using knowledge extracted from a larger model (the teacher). In this paper, we present a survey of knowledge distillation techniques applied to deep learning models. To compare the performance of different techniques, we propose a new measure, the distillation metric, which compares knowledge distillation solutions based on model size and accuracy. Based on the survey, we draw several conclusions, including current challenges and possible research directions.
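To make the two ideas in the abstract concrete, the sketch below illustrates (a) a standard soft-target distillation loss of the kind used to train a student from a teacher, and (b) a simple score that trades off model size against accuracy in the spirit of the proposed distillation metric. This is a generic PyTorch illustration, not the paper's method: the temperature, alpha, and weight values, and the exact form of distillation_metric, are assumptions for demonstration only.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Weighted sum of a soft-target term (teacher guidance) and the usual
    hard-label cross-entropy. Hyperparameter values are illustrative."""
    # Soften both distributions; the KL term pulls the student toward the
    # teacher's class-probability structure.
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_loss = F.kl_div(log_student, soft_targets,
                         reduction="batchmean") * temperature ** 2
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

def distillation_metric(student_size, teacher_size,
                        student_acc, teacher_acc, weight=0.5):
    """Hypothetical size-vs-accuracy score (lower is better); the paper's
    actual formula may differ. Combines the compression ratio with the
    relative accuracy loss of the student."""
    size_ratio = student_size / teacher_size        # smaller is better
    acc_retention = student_acc / teacher_acc       # closer to 1 is better
    return weight * size_ratio + (1.0 - weight) * (1.0 - acc_retention)
```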
Funder
King Fahd University of Petroleum and Minerals (KFUPM), Dhahran, Saudi Arabia
Cited by
41 articles.