Authors:
Wang Yanmei, Han Zhi, Yu Siquan, Zhang Shaojie, Liu Baichen, Fan Huijie
Abstract
Various methods exist for transferring knowledge between neural networks, such as parameter transfer, feature sharing, and knowledge distillation. However, these methods are typically applied between networks of equal size or from larger networks to smaller ones. There is currently a lack of methods for transferring knowledge from shallower networks to deeper ones, which matters in real-world scenarios such as system upgrades, where network size is increased for better performance. End-to-end training is the standard approach, but under this strategy the deeper network cannot inherit the knowledge of the existing shallower network; as a result, network flexibility is limited and substantial computing power and time are wasted. It is therefore imperative to develop new methods that enable the transfer of knowledge from shallower to deeper networks. To address this issue, we propose a depth incremental learning strategy (DILS). It starts from a shallow network and deepens it gradually, inserting new layers at each step until the required performance is reached. We also derive an analytical method and a network approximation method for training the newly added parameters, guaranteeing that the new deeper network inherits the knowledge learned by the old shallower one. DILS enables knowledge transfer from smaller to larger networks and provides good initialization of the larger network's layers, stabilizing the performance of large models and accelerating their training. Its soundness is grounded in information projection theory and is verified by a series of experiments on synthetic and real data.
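As a concrete illustration of the deepening step described above, the sketch below inserts a new layer into a trained network with a function-preserving identity initialization, so the deeper network initially reproduces the shallower one's outputs. This identity scheme is an illustrative assumption in the spirit of Net2Net-style deepening, not the paper's analytical or network approximation method; the `deepen` helper and its parameters are hypothetical.

```python
# Minimal sketch of function-preserving layer insertion (assumed scheme,
# not the paper's exact method): the new Linear layer is initialized to the
# identity so the deepened network matches the shallower one at insertion time.
import torch
import torch.nn as nn


def deepen(model: nn.Sequential, position: int, width: int) -> nn.Sequential:
    """Insert a Linear+ReLU block at `position`, initialized so the
    deepened network computes the same function as the original."""
    new_layer = nn.Linear(width, width)
    with torch.no_grad():
        new_layer.weight.copy_(torch.eye(width))  # identity weights
        new_layer.bias.zero_()                    # zero bias
    layers = list(model.children())
    # ReLU(I @ h) == h whenever h is itself a ReLU output (h >= 0),
    # so inserting after a ReLU preserves the shallower network's outputs.
    layers[position:position] = [new_layer, nn.ReLU()]
    return nn.Sequential(*layers)


if __name__ == "__main__":
    shallow = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
    deep = deepen(shallow, position=2, width=16)  # insert after the first ReLU
    x = torch.randn(5, 8)
    assert torch.allclose(shallow(x), deep(x), atol=1e-6)
```

The inserted layers then serve as a well-conditioned starting point for further training, which is the role the abstract attributes to the initialization of the deeper network.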