Author:
Hayato Futami, Hirofumi Inaguma, Sei Ueno, Masato Mimura, Shinsuke Sakai, Tatsuya Kawahara
Cited by 35 articles.
1. Improving End-to-End Speech Recognition Through Conditional Cross-Modal Knowledge Distillation with Language Model. 2024 International Joint Conference on Neural Networks (IJCNN), 2024-06-30.
2. ViLaS: Exploring the Effects of Vision and Language Context in Automatic Speech Recognition. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024-04-14.
3. Multiple Representation Transfer from Large Language Models to End-to-End ASR Systems. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024-04-14.
4. Keep Decoding Parallel With Effective Knowledge Distillation From Language Models To End-To-End Speech Recognisers. ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024-04-14.
5. Automatic Speech Recognition with BERT and CTC Transformers: A Review. 2023 2nd International Conference on Electronics, Energy and Measurement (IC2EM), 2023-11-28.