Reference49 articles.
1. Distilling the knowledge in a neural network;Hinton,2015
2. J. Yim, D. Joo, J. Bae, J. Kim, A gift from knowledge distillation: Fast optimization, network minimization and transfer learning, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 4133–4141.
3. Adaptive multi-teacher multi-level knowledge distillation;Liu;Neurocomputing,2020
4. Spot-adaptive knowledge distillation;Song;IEEE Trans. Image Process.,2022
5. Z. Li, X. Li, L. Yang, B. Zhao, R. Song, L. Luo, J. Li, J. Yang, Curriculum temperature for knowledge distillation, in: Proceedings of the AAAI Conference on Artificial Intelligence, 2023, pp. 1504–1512.
Cited by
2 articles.
订阅此论文施引文献
订阅此论文施引文献,注册后可以免费订阅5篇论文的施引文献,订阅后可以查看论文全部施引文献