1. Agarwal, A., Zesch, T.: LTL-UDE at low-resource speech-to-text shared task: investigating Mozilla DeepSpeech in a low-resource setting. In: SwissText/KONVENS, June 2020
2. Gamage, B., Pushpananda, R., Weerasinghe, R., Nadungodage, T.: Usage of combinational acoustic models (DNN-HMM and SGMM) and identifying the impact of language models in Sinhala speech recognition. In: 2020 20th International Conference on Advances in ICT for Emerging Regions (ICTer), pp. 17–22. IEEE, November 2020
3. Håkansson, A., Hoogendijk, K.: Transfer learning for domain specific automatic speech recognition in Swedish: an end-to-end approach using Mozilla’s DeepSpeech. Master’s thesis, Chalmers University of Technology and University of Gothenburg, Sweden (2020). https://odr.chalmers.se/server/api/core/bitstreams/33a4eb91-f2b2-4f0e-842b-88c9d56985b9/content
4. Karunanayake, Y., Thayasivam, U., Ranathunga, S.: Sinhala and Tamil speech intent identification from English phoneme based ASR. In: 2019 International Conference on Asian Language Processing (IALP), pp. 234–239, November 2019
5. Kunze, J., Kirsch, L., Kurenkov, I., Krug, A., Johannsmeier, J., Stober, S.: Transfer learning for speech recognition on a budget. arXiv preprint arXiv:1706.00290 (2017)