Authors: Takashima Ryoichi, Li Sheng, Kawai Hisashi
Cited by: 35 articles.
1. Factorized and progressive knowledge distillation for CTC-based ASR models;Speech Communication;2024-05
2. Progressive Unsupervised Domain Adaptation for ASR Using Ensemble Models and Multi-Stage Training;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14
3. Distilling Hubert with LSTMs via Decoupled Knowledge Distillation;ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2024-04-14
4. Knowledge Distillation for Memory-Efficient On-Board Image Classification of Mars Imagery;IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium;2023-07-16
5. Robust Knowledge Distillation from RNN-T Models with Noisy Training Labels Using Full-Sum Loss;ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP);2023-06-04