Authors:
Wang Rui, Bai Qibing, Ao Junyi, Zhou Long, Xiong Zhixiang, Wei Zhihua, Zhang Yu, Ko Tom, Li Haizhou
Cited by: 25 articles.
1. Efficiency-oriented approaches for self-supervised speech representation learning; International Journal of Speech Technology; 2024-08-19
2. Improving End-to-End Speech Recognition Through Conditional Cross-Modal Knowledge Distillation with Language Model; 2024 International Joint Conference on Neural Networks (IJCNN); 2024-06-30
3. Noise Robust Distillation of Self-Supervised Speech Models via Correlation Metrics; 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW); 2024-04-14
4. STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models; ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2024-04-14
5. CoLLD: Contrastive Layer-to-Layer Distillation for Compressing Multilingual Pre-Trained Speech Encoders; ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP); 2024-04-14