Author: Liu Yigang, Zhao Yue, Xu Xiaona, Xu Liang, Zhang Xubei
Publisher: Springer Nature Switzerland